Wednesday, November 14, 2007

Coloring Transformers toys

When Hasbro makes a new Transformers toy they will release at minimum two versions of it. As these toys get released there is the inevitable debate about the paint scheme, what would have worked better, etc. Why not make a website where users can help select paint schemes? The page could show a model of the toy along with some basic information, such as what is typically mirrored on the left and right sides and basic breakdowns such as where the hands end and the arms begin: essentially anywhere on the toy that would not normally be painted in different colors. With that up and running, randomly generate half a dozen paint jobs for the toy. Users on the website select the paint jobs they like, which becomes the fitness function for a genetic algorithm over the paint schemes. With enough users (and there are plenty) you should be able to generate some really nice paint schemes as suggestions for the second (and sometimes third or fourth) versions of the toy.
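
A minimal sketch of how the genetic algorithm could work, in Python. The region names, palette size, population size, and mutation rate are all made up for illustration; the key point is that the schemes users click on *are* the fitness function.

```python
import random

REGIONS = ["head", "chest", "arms", "hands", "legs"]   # hypothetical paintable regions
PALETTE = list(range(16))                              # color indices in some palette

def random_scheme():
    # one genome = one color per paintable region
    return [random.choice(PALETTE) for _ in REGIONS]

def breed(parent_a, parent_b, mutation_rate=0.1):
    # uniform crossover plus an occasional random mutation
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    for i in range(len(child)):
        if random.random() < mutation_rate:
            child[i] = random.choice(PALETTE)
    return child

def next_generation(population, liked):
    # only the schemes users clicked on get to reproduce
    survivors = [population[i] for i in liked]
    return [breed(random.choice(survivors), random.choice(survivors))
            for _ in range(len(population))]

population = [random_scheme() for _ in range(6)]
population = next_generation(population, liked=[0, 2])   # pretend users picked #0 and #2
```

Each page view shows the current half dozen schemes; each click feeds back into the next generation.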

Tuesday, October 9, 2007

Can anything ever topple ebay?

Is it possible that eBay can ever be overthrown? Perhaps with a mashup of the smaller auction sites, or a completely distributed auction site. You would think there would be more competition than there is today. eBay itself is a pretty horrible web application. Off the top of my head, here are some features they could provide to me, the auction shopper:

User tagging: I see an auction for a 1984 Transformers Optimus Prime with trailer, but it doesn't contain any of those words in it; instead it was titled (misspelled) "Transfomers lot 5". I would *love* to just slap a few tags on the auction to help out others (I already have him). The wiki editor in me just screams out to correct basic errors and missing information.

Price history: A bit of a stock ticker, but once you have tags, why not provide a graph of previous auctions' end prices with the same set (or subset) of tags?

Find me good deals: You know my history, so automatically highlight auctions that have really good Buy It Now prices. Not wanting to miss out on a deal, I will probably click the Buy It Now right then and there.

When to browse: There is plenty of time information that isn't presented. As a shopper: tell me what time people post new auctions the most, and I will look at the new auctions then for deals. As a seller: tell me what time people visit the site the most, and I will start my auctions then.

Help me spend my money: eBay provides its Pulse pages, but they are crude and limiting. Why not show me what auctions people are fighting over? Why not let me search for auctions that already have bids (probably interesting)? Why not let me see what auctions people click on the most? When I am done looking for the one item I thought to look up, provide links to other auctions I might like. Use any basic recommendation engine and get what Amazon had ten years ago.

Images: Scanning through auctions, if they don't have images I probably won't click on them, which means everyone loses out: I don't get something I might have wanted, the seller gets fewer viewers, and eBay gets less money because fewer people fight over the item. Sure, images might earn eBay 5 cents from the seller (if it sells), but is it worth the lost revenue? (Maybe it earns them a lot, but it sure makes me feel like I am not the customer, they are.)

Blog embedding: Blogs like to embed YouTube videos, so why can't they embed eBay auctions? Supposedly this is in the works, but it has been a hell of a time coming.

I can think of ten more, and you probably can too. eBay has a monopoly, so they don't have much incentive to make all of these things happen. And on the flip side, it is extremely hard for anyone to enter the market, because to make an auction work you need buyers, and to get buyers you need sellers. A nasty catch-22. The only way I can think of solving it is to first make a very niche auction site, say for digital cameras or Transformers or something small. Make it free and target that market until you are known and successful for it, and then move on. Of course then we end up with just another eBay, so I don't like that too much either.

Another route would be to create a mashup of all the auction sites. There are plenty of eBay copycats (even down to the ugly layout) struggling to get by (why they don't go niche, I don't know). Combine their auctions with eBay's and provide the tools and features users would actually want, so that they will want to browse eBay on your site rather than on eBay itself. But you are limited in what you can provide, and it will still take a very long time (if ever) to take down eBay.

Stepping completely out of the box how about this:
Many of the sellers on eBay are full-time people who have their own websites (but not so big as to be a Yahoo store). Create a specification that can be used to list auctions hosted on any web site. This could perhaps be as simple as having them create an Atom feed. Included in the feed would be a link to the auction house where you can bid on the item. The auction house would be the place where you would have to actually log in to bid on the auction. A third-party non-profit would hold the login information for all of the auction houses (think of how Google is to all the Google apps for login) so users can log into any auction house (no pain for them). With the feeds, any auction house could layer features on top, such as tagging and history. Auction houses would compete to host the bidding. How would they compete? By drawing in users. How do you draw in users? Create good user interfaces and continue to come up with new features and services that provide value to shoppers. So now you have an easy way for stores to post auctions (even to eBay!) and a way for anyone to make an auction house and begin adding features that users want.
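
A sketch of what one listing in such a feed might look like, built with Python's standard XML library. The Atom namespace is the real one from RFC 4287; the auction house URL and end-time convention are invented for illustration:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def auction_entry(title, bid_url, end_time):
    # one Atom <entry> per auction; the <link> points at whichever
    # auction house is hosting the actual bidding
    entry = ET.Element("{%s}entry" % ATOM)
    ET.SubElement(entry, "{%s}title" % ATOM).text = title
    ET.SubElement(entry, "{%s}link" % ATOM,
                  {"rel": "alternate", "href": bid_url})
    ET.SubElement(entry, "{%s}updated" % ATOM).text = end_time
    return entry

entry = auction_entry("1984 Optimus Prime with trailer",
                      "http://example-auction-house.test/bid/42",  # hypothetical URL
                      "2007-10-09T20:00:00Z")
xml = ET.tostring(entry).decode()
```

Any aggregator that already speaks Atom could consume these listings, which is the whole appeal: the spec work is mostly done.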

Thursday, October 4, 2007

The most underutilized part of programs: what happened in the past

When sending an SMS on my cell phone, I select the person it is going to: the address book comes up, I hit "j", and it then only displays names that start with "j". Even though 99% of the time I select my wife "Jen", it never pre-selects her by default, just the first one in the list.

When running a set of auto-tests, remember what has failed before and run those tests first.
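
A sketch of that idea in Python; the history file name and test names are invented for illustration:

```python
import json, os

HISTORY_FILE = "failed_tests.json"   # hypothetical on-disk memory of past failures

def load_failures():
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE) as f:
            return set(json.load(f))
    return set()

def order_tests(tests, failed_before):
    # previously failing tests run first; everything else keeps its order
    # (sorted is stable, and False sorts before True)
    return sorted(tests, key=lambda t: t not in failed_before)

tests = ["test_parse", "test_render", "test_save"]
ordered = order_tests(tests, failed_before={"test_save"})
```

The payoff is fast feedback: the test most likely to fail again is the first one you hear about.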

Existing examples (that seem almost revolutionary):

When launching your browser, rather than showing a blank page, show the page you visit most often.
- Opera

When compiling a project, rather than preprocessing the same headers over and over, store the result for reuse on the next file to be compiled.
- precompiled headers (in some compilers, new for gcc 4)

Remember what TV shows I watched in the past, and if there is free HD space, I am not doing anything, and I have never seen that episode of the series, automatically record it for me.
- Tivo

Friday, September 28, 2007

Bugs That Eat Your Desktop (Screensaver)

A screensaver that consists of little bugs that eat your desktop, have offspring, and compete to survive.

Following Disney's rules, the bugs would be cute: made of a simple circle with big eyes, and when moving they would bend their shape slightly. When in heat they would pulsate, and as they got older their color would get darker. While sleeping they could close their eyes and ever so slightly "breathe".

When the screensaver starts, a set of bugs is randomly generated. They start out very small (~1px), and as they eat food (colors) they can grow in size, mate, and have offspring. As the desktop colors are eaten they slowly grow back, as though the desktop were grass. Each bug's movements would be controlled by its genome, and a genetic algorithm would be at the heart of the screensaver.

Each bug has a set of parameters that determine its attributes:

- What color is the bug itself?
- How fast can it move?
- How much does it cost to move?
- How big can it grow (i.e. how much energy can it store)?
- What colors can it eat?
- How far can it see, and what can it see?
- How much does it sleep (does it sleep at all)?
- Does it reproduce sexually or asexually?
- What is the maximum time it can live before expiring?
- How much extra energy does it need to reproduce?

The interesting aspect of the screensaver is that, because it is based upon whatever is on your screen at the time the screensaver starts, different bugs will emerge every time it is run: a little bug that only walks the black borders of all the windows, a big fat one that eats a green webpage, and a tiny parasite bug that nibbles on bigger bugs.
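
One way the parameter list above could map onto a genome, sketched in Python. Every field name and default value here is an invented example, not a spec:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Bug:
    # each field is one gene from the parameter list above
    color: int = 0
    speed: float = 1.0
    move_cost: float = 0.1
    max_energy: float = 10.0
    edible_colors: set = field(default_factory=lambda: {0})
    sight_range: int = 5
    lifespan: int = 1000
    breed_threshold: float = 5.0
    energy: float = 1.0
    age: int = 0

    def step(self, pixel_color):
        # one tick: pay the cost of moving, eat the pixel if it is edible,
        # and report whether the bug is still alive
        self.age += 1
        self.energy -= self.move_cost * self.speed
        if pixel_color in self.edible_colors:
            self.energy = min(self.energy + 1.0, self.max_energy)
        return self.energy > 0 and self.age < self.lifespan

bug = Bug()
alive = bug.step(pixel_color=0)   # eats the pixel, pays the move cost
```

Mutating and crossing these fields between generations is where the genetic algorithm comes in; bugs whose genes fit the current desktop survive to breed.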

Update (Nov 14th 2007): Check out this super cool video on PolyWorld.

Tuesday, September 18, 2007

web logs for desktop applications or why web applications kick desktop applications butt

When developing any application, after you get the core built you begin adding features for the users. Typically you have to guess what the users would want and use. This results in some features being essentially dead because almost no one ever uses them. There is no way to know what is *really* used vs. what is used only every once in a while.

With a web app:
- You can divide your customers in half. Give something a shot, and if it works then roll it out; otherwise kill it.
- View the logs to see what page they leave on. Rework the text. Add a link to the bottom.
- Follow customers through your sign-up pages and see that 50% stop after page 2 and never get to page 3.
- Every page, every link, every action, even mouse movements can be tracked.

Compare this to desktop applications. Today you can only track the number of downloads or the number of CDs sold. Desktop applications need a way to log all of the user actions so that the developers can get reports about actual usage.
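
A minimal sketch of what such a logger could look like; the action names are invented, and a real version would flush to disk and upload only with the user's consent:

```python
import json, time
from collections import Counter

events = []   # in a real app this would be persisted and uploaded opt-in

def log_action(action, **details):
    # one record per menu click, shortcut, dialog, etc.
    events.append({"time": time.time(), "action": action, **details})

def usage_report():
    # the report a developer would look at: which features are dead?
    return Counter(e["action"] for e in events)

log_action("menu.save")
log_action("menu.save")
log_action("menu.export_pdf")
report = usage_report()
```

Even this crude count answers the question the post raises: a feature whose counter stays at zero for a release cycle is a dead feature.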

Tuesday, September 4, 2007


Simple version:
If the only bank in the world has only $1000 and loans $200 (at interest) out to each of five people, what happens the next year when the five people try to pay it back?
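
The arithmetic, spelled out (assuming a 10% rate purely for illustration):

```python
total_money = 1000.0   # every dollar in existence
principal = 200.0      # lent to each of five borrowers
rate = 0.10            # assumed interest rate, for illustration only

loaned_out = 5 * principal              # the whole money supply is out on loan
owed_back = 5 * principal * (1 + rate)  # what the borrowers collectively owe
shortfall = owed_back - total_money     # dollars that simply do not exist
```

The borrowers collectively owe $1100, but only $1000 exists; the missing $100 can only come from new loans, which is exactly the treadmill the game is about.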

The year is 1914 and you are in charge of a new bank called the Central Federal Bank, or CFB for short. All money is now traded in "Federal Reserve Bank Notes", or $ for short. The CFB can lend money to other banks or the government.

At the start of the game the following are the starting values for CFB that you can change:
- Owns 10,000 tons of gold
- The bank must hold at least 40% of its outstanding loans in gold.
- $ are valued at $20 per ounce, so the bank has $16 billion it is allowed to trade for gold or loan out.
- The loan interest rate starts at 1%.

Gold production starts at 500 tons a year and increases on average 15 tons a year.

The point of the game is to last as long as possible and to make as much money as possible. You make money by giving out loans with interest (such as to the government). Giving a loan creates new notes in the system. If there are too many notes in the system there will be inflation; too much inflation and the people revolt and you might lose the game. Too little money in the system and many borrowers default on their loans, causing a depression, and you might lose the game.

One of the first goals of the game is to remove the FRBN from the gold standard.

The interesting part of the game is that, because the majority of money is eventually created from loans (with interest), the system will collapse on itself. The longer things go on, the more interest is owed. It is your job to make it last as long as possible. It is like Tetris: eventually you will lose, it is just a question of when.

Of course games are about fun, so taking the above basic parameters, create challenges that start out simple but get harder and harder as time goes on, with rewards for successful completion of tasks: collecting more and more assets before the collapse, gaining abilities such as tweaking parameters at the banks within the system, and buying political clout. Wars, depressions, getting off the gold standard, inflation, new presidents, bubbles and busts; everything that can happen, does happen :)

Friday, August 31, 2007

Valgrind code coverage

To obtain code coverage on tests, the only real option today is to use gcov. To use gcov you must use gcc and link a library into everything that is executed, which can be a pain. Rather than doing that, why not either a) write a tiny valgrind module that outputs the lines touched and jumps taken to a file, or b) re-use callgrind's output, and then combine that data with a proper parser (for your specific language) to generate code coverage reports.
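
A sketch of option (b) in Python: pulling the executed source lines out of callgrind-style output, which is the raw material for a coverage report. The sample data is invented, but the `fl=` (file) markers and `position cost` lines follow callgrind's actual output format:

```python
SAMPLE = """\
fl=widget.cpp
fn=draw
12 105
13 105
fl=main.cpp
fn=main
3 1
"""

touched = {}         # file -> set of executed line numbers
current_file = None
for line in SAMPLE.splitlines():
    if line.startswith("fl="):
        # callgrind announces the source file for the lines that follow
        current_file = line[3:]
        touched.setdefault(current_file, set())
    elif line and line[0].isdigit():
        # "line cost" records: the line number is the first field
        lineno = int(line.split()[0])
        touched[current_file].add(lineno)
```

Cross-reference `touched` against the lines your language parser says are executable, and you have a coverage report with no gcov linkage at all.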

Friday, August 10, 2007

web browser

Make a web browser using WebKit and Qt. Host it on and incorporate anything and everything to make the browser integrate with freedesktop. Using Qt, the browser would look good in both GNOME and KDE with the Cleanlooks and Plastique themes. DBus support is a must, mimetype support, and more. Create a standard for storing bookmarks. Use xdg-user-dirs to place things in the Download directory. Would it work? Would it take off? Probably not; people like to bicker too much. KDE users wouldn't use it because it isn't Konqy, and GNOME users would refuse to use it because it has Qt. Would it be fun to make? Yes.

Edit: So I ended up kinda doing this one: http:// Rather than being for freedesktop, it is more cross-platform.

Monday, August 6, 2007

Collaborative Documentation Web Application

A web application that can take documentation, perhaps in XML or another format such as Doxygen's, and display it; the added value would be that it would allow users to add comments on any part of the documentation. The application would provide features making it very easy for the upstream documentation writers to know about new comments so that they can integrate useful ones into the documentation.

Common errors discovered by users that would benefit from an easy way to report them include:

  • Spelling mistakes

  • Typos

  • Broken links

  • Examples that don't compile

  • Docs that simply don't match the behavior of the code

  • Documentation that could be worded better

  • Undocumented behavior

Beyond those, users are able to add quite a bit of value to documentation when presented with a way to, such as:

  • Example usage

  • Workarounds

  • Performance tips

PHP's website was the first I saw that had this feature built in. A good example is str_replace. The users provide a lot of extra value. The problem is that it is only for PHP, and the same goes for the few other collaborative documentation sites that are out there. Compared to the hundreds of Doxygen-generated sites out there, this is a project that would have a lot of users.

Tuesday, July 24, 2007

Making testing enjoyable with parsers

A lot of people find writing autotests tedious. But once you have a parser for your code, there are a number of very interesting projects you can write to help test your code and improve your auto tests.

The faster you can find errors, the less time you will spend later on fixing bugs. The more I can automate things, the more tests I will get, and the better the chance that I will find bugs sooner, while I am still familiar with the code.

Generate autotests

The first thing you should do with the parser is write a tool that will generate a stub auto test for you. Once you have an autotest, adding new ones is not that hard, but creating that first one can be. Don't stop at just a stub: basic tests that simply call every function (to make sure they don't crash) and more are very useful to generate. After that, when writing the stub that will test function X, the parser should walk inside of X and automatically add a comment to the test for each control flow statement. Then all that has to be done is to go through and implement each comment. This should remove a lot of the guesswork about what to test.

Mutation testing

Once you have autotests, you can run your parser over your code, only this time have it modify the code. There are plenty of simple changes, such as turning if (foo) into if (!foo), that can be done with very little work. After each modification, run the auto tests and see if they still pass, in which case the auto test coverage isn't very good. Record the changes that don't cause any failures in the tests.


When adding a feature or fixing a bug, one thing you want to do is make sure that your changes didn't accidentally change something they weren't meant to. Generate a set of known facts about your object. To start this database, use your parser to call every function and record what happens. It doesn't need to know whether what it does is right or wrong, just what it does. After making a change, you can compare any differences to make sure you didn't break something you didn't mean to. The more data you can add to the database, the more confident you can be that your changes didn't cause unintended regressions.
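
This is sometimes called characterization or "golden master" testing. A minimal Python sketch, with an invented stand-in function and input set (a real version would be driven by the parser and persist the baseline to disk):

```python
def length_of(s):
    # the code under change: a trivial stand-in example
    return len(s)

INPUTS = ["", "a", "hello"]

def snapshot(func):
    # record what the code does today, right or wrong
    return {repr(i): func(i) for i in INPUTS}

baseline = snapshot(length_of)   # in real life, saved before the change

def diff_against(baseline, func):
    # after the change: which inputs now behave differently?
    current = snapshot(func)
    return {k for k in baseline if baseline[k] != current[k]}

changed = diff_against(baseline, length_of)   # empty set: behavior unchanged
```

An empty diff doesn't prove the change is right, only that nothing recorded moved, which is exactly the guarantee the post is after.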

Test failure finder

I have previously written about this method here: Test failure finder.


A simpler version of the previous test failure finder: use the parser to determine what can be called, and just call the functions with random data (from a seed) until something crashes. This is a way to test your code without having to write any code at all, and it can be run starting from day one.
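
A seeded random-input harness could look like this; the function under test and the input generator are invented examples (this simple function happens to survive printable ASCII, so the crash list stays empty):

```python
import random

def parse_digits(text):
    # the function under test
    return [int(ch) for ch in text if ch.isdigit()]

def fuzz(func, runs=200, seed=1234):
    # same seed -> same inputs, so any crash found is reproducible
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = "".join(chr(rng.randrange(32, 127))
                       for _ in range(rng.randrange(20)))
        try:
            func(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

crashes = fuzz(parse_digits)
```

The seed is the whole trick: a crash found on day one can be replayed on day one hundred with the exact same input.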

I don't like to fix bugs, I like to code.

Saturday, July 21, 2007

Photo booth with a comment field

In OS X there is an application called Photo Booth. It would be nice if there were a very easy way to add comments to the image that would be saved into the image's EXIF data. Then you could add comments about who was in the photo, thoughts, etc.

Make an application as simple as Photo Booth (for images or video), but with the ability to add comments and then upload to Flickr or YouTube.

Thursday, July 19, 2007

Valgrind memory measuring done better

Create a memory tool similar to KCacheGrind.

Valgrind is a fantastic application with several built-in tools. One of the tools is massif, a heap profiler. While nice, this tool was quickly put together and leaves you wanting more. It generates two output formats. The first is a pdf file: the data is squished to fit the pdf size, colors are reused, and in general it is just hard to read. The second format is txt/html. The txt is near worthless, but the html is at least a little bit useful. With enough time and effort you can get some good information out of it. At the end of the day, massif only touches the tip of the iceberg as far as features go.

Change massif to generate a data file similar to callgrind's, where you get a complete stack trace every time memory is allocated or deallocated. Then a rich tool can be created that graphs the data; you can select any allocation and get details on what it is and where it was allocated.

Another tool could run as the data is being generated, showing a live graph of memory usage, so when you do something in your application and the graph spikes you would immediately see it.

A tool like this is missing in open source.

git in a word processor

Combine git with a word processor transparently to provide an editor that is leaps and bounds beyond what any word processor provides today as far as versioning and branching go. Be able to share docs and actually merge them.

Wednesday, July 18, 2007

Make your own social bookmarking site

Combine and to get a site where you can create a forum/social bookmark site about any topic you want in two minutes. is a good site: a bit of a forum site, a bit of a social network site. Unfortunately, as it has grown in size, the content found on reddit has become more general and there is a lot of groupthink. Overnight there are dozens of articles about impeaching the president, or electing Ron Paul, or some random big news story of the week. The more users that sign onto reddit, the more general the topics become. Breaking news is much more likely to survive than a link to a published science paper, no matter how good it is. About a year ago reddit started getting kitten pictures, and today many programming articles don't make it to the front page anymore.

With tags and filters one would be able to ignore the latest reddit fad, but there is something that tags could never fix. If you submit an article about an interview with a comic artist, there are probably more people on reddit who will mod it down than up. Now if you were to go to a comic forum/social site, that same article might be the top article and debated in depth. Topics that are not mainstream but niche don't work too well on social bookmarking sites like reddit.

Many people have taken the approach of creating their own reddit/digg clone specifically for a niche. One site lets you submit rumors, and another lets users vote on pre-identified blogs (including themselves); those are just two such clones and there are many more. Some of these sites have been coded from the ground up, but there are starting to be programs you can download and install to get a digg clone. This is just the next generation of the bulletin boards from ten years ago. You still have to own the domain and hosting and perform all the maintenance on the site.

 provides a nice service: before, you typically had to set up your own domain and have your own blog, but now in five minutes flat you can have a blog. Heck, you could have ten blogs about ten topics if you wanted, just as easily. Now imagine a site like, but rather than making blogs, it would let you create reddit clones. Create a social bookmark site around any niche you want. If there is a community then it survives, and if not, well, it was only a few minutes of work to set up.

Slashdot, K5, digg, reddit: one by one they have risen, and then what was once so nice fills up with junk. The community moves on to a new social site. Well, they don't move overnight; first someone has to make a new site for them to move to. With this site-builder, a new community could be created overnight. A community can fracture in a nice way. When Ron Paul became big, anyone could have overnight made a Ron Paul social bookmarking site where every single link could be posted and appreciated, rather than the slow death on reddit where eventually everyone was voting down Ron Paul articles, not because they were bad, but simply because they were about Ron Paul.

The idea that reddit brings is good, good enough to be bought, but bigger than reddit would be a site that lets you make sites. That's a site that Google would buy.

Update: I have been informed of one such site that is trying to do just this (and looks quite reddity):

Update: And another site, and a site that is just for comments:

Web browser on a canvas

Take the browser window and slap it on a canvas. By default it could look like Firefox/Safari etc., but unlike them you can zoom out to see a graph showing your history. If I started out at page A, then clicked on B, and then opened a new tab AA, when I zoomed out to the graph view I would see the following:

   [A]
  /   \
[B]   [AA]

Scrolling up and down, there could be lines highlighting the hour/day change, making it easy to find where you were. All pages in your history would be indexed and searchable. Typing in a search would hide the nodes that don't apply, leaving the remaining ones easy to browse.

Because the browser is built on top of a canvas, animations and transitions will be very easy to implement. Features that you have seen on the iPhone and in the newer 3D window managers can be done with ease. Going back could have the page scroll off the bottom of the screen while the previous site scrolls down and subtly "bounces" into the window.

Developers could add all sorts of features: hit a key and, like OS X's Dashboard, you can flip a webpage around and dissect it; information nodes/boxes could pop off all sides, and with zooming in and out of the canvas there is plenty of room for all sorts of addons and details about the site without having to sacrifice anything because of space.

Or when a webpage is flipped over, there would be all the user addons, such as the ability to list which filters apply to that site, whether cookies should be allowed, etc.

Update: Check out what Zack has done with WebKit on a canvas.

Wednesday, July 11, 2007

Transformers stock graph

With eBay publishing its auctions via RSS, you could create a "stock" graph of Transformers toys. It would be interesting to see the price of items rise and fall over time, spiking at Christmas perhaps, etc. Of course Transformers are just the tip of the iceberg; it would be interesting to see the graphs for many hot items.

Scale with a memory card

How about a weight scale that you could put a CF or SD card into? Every time it measures your weight it records the time and amount. Then you get a nice, easy, and simple method of keeping track of what you weigh that can very easily be turned into graphs, etc.

Update: Scale with bluetooth!

Update #2 cheaper wifi model:

Tuesday, June 19, 2007

Calculator Wars!

Who wants a boring calculator that just shows you your answers when you could have it shown to you in an epic battle?

On the top half of the screen is an empty field, and beneath it is an empty line edit. When you type "3" into the line edit, three blue knights appear on the left-hand side of the screen. You continue typing "+ 2" and two red knights appear on the right-hand side. When you hit the enter key, the knights all trot into the middle of the screen and their colors turn yellow, giving you five yellow knights.

Using other operations makes for a more interesting view. "3 - 2" will result in the knights fighting until there is only one left. The same goes for division. Multiplication is a bit more tricky: the family-friendly version could have the knights calling for reinforcements, while the adult version could use animals rather than knights, and they would just multiply after a quick climb over each other.

When you add parentheses, the groups would be divided up and you could watch the different divisions attacking to see which color survives in the end. It could either do a random free-for-all that results in the correct answer, or the divisions could attack in the proper mathematical order.

This is all about eye candy, so there would be many different types of creatures. Subtraction could be trolls that come over and club, or even aerial planes that bomb before flying off screen. Given a little intelligence it would be a lot of fun to watch. The battles don't need to be short; a long calculation with big numbers could take a long time.

From the Square One TV show, a skit with a similar premise to Calculator Wars :)

Thursday, April 19, 2007

concurrent javascript

How about running Javascript code in parallel automatically?

  1. Take javascript code A.
  2. Parse code A into a code tree and output it as LISP.
  3. Parse the LISP and separate it into blocks that can safely be run in parallel; insert map/reduce calls to be expanded.
  4. Use a LISP-to-javascript compiler to generate javascript code which now contains the map/reduce calls.
  5. Execute the javascript inside an interpreter that has map/reduce library hooks.
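
The payoff of steps 3-5 can be illustrated in Python standing in for the generated javascript: a loop whose iterations don't depend on each other becomes a map, and a map can run in parallel. The `expensive` function here is an invented stand-in for a pure, side-effect-free block that step 3 would have identified:

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    # stand-in for a block with no cross-iteration dependencies
    return x * x

items = list(range(8))

# the sequential original
sequential = [expensive(x) for x in items]

# the automatically transformed version: same result, parallel execution
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(expensive, items))
```

The hard part, of course, is step 3 proving that the block really is independent; everything after that is mechanical.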

It might work, it might not. Fun to ponder.