I love the Internet
So, so good in so many ways. I want more.
I've been doing a lot of work in Ruby on Rails lately and absolutely love it. In the past few days I've needed to port an existing database over to a new schema for comparisons and benchmarking. It took a bit of digging to find what turned out to be a typical RoR (read: elegant) answer to the related problem of maintaining connections to multiple databases in Rails, so I thought I'd offer a link to Dr Nic here.
I had a need recently to convert a .bin/.cue CD image pair to .iso for mounting on OSX. I was considering writing a quick utility to handle the task, but in the process of researching the file formats, I found BinChunker, a GPL-licensed piece of code that does exactly what I need, simply and directly. The official site has the source code and RedHat RPMs, but if you are on OSX, I did a quick compile of the latest version which you can download here.
Once you download the utility, issue this command from a shell prompt in the directory where you downloaded the file:
sudo cp bchunk /usr/bin/
This will copy the file into a location where the system can find it at will (a.k.a. the path). Then, to convert a .bin/.cue pair to a .iso, you can issue this command:
bchunk myinputfile.bin myinputfile.cue myoutputfile
Short, sweet, and simple — and lightweight too, weighing in at only 20k.
UPDATE: As commenter Frederik has pointed out, this can give a permission denied error if your user account does not have execute permissions on the file. Execute this command after copying the file to /usr/bin/ to solve this problem:
sudo chmod a+x /usr/bin/bchunk
If you are getting a "not found" error, make sure that /usr/bin/ is in your path. To check this, type
echo $PATH
and look for /usr/bin/ in the result. If it isn't there, type
sudo nano /etc/profile
and append :/usr/bin to the PATH=... line (entries are separated by colons). Then press CTRL+X followed by Y to confirm, and the enter key to accept the filename, saving and exiting nano. Finally, execute
source /etc/profile
to refresh the path.
One of the technological highlights of TED was Microsoft's Photosynth, a way of mapping user-submitted photos onto 3D models as a novel way of photo-mapping the world. It was absolutely stunning. The demo, linked above, runs in IE or Firefox, but only on Windows; it's worth the trouble, though, both as an actual experience and as a hint of what's to come. The presenters had a great attitude regarding the common anti-MS sentiment of the world ("Who would ever have thought that there would be a Microsoft talk at a session called 'Simplicity'?"), and once they started showing their work, they clearly had no reason for concern. Take a look for yourself and be impressed.
I recently came across a gifted copy of Windows Vista Ultimate. As it turns out, I needed a copy of Windows for some CAD work on my MacBook Pro, so I figured I'd give it a shot. Vista isn't officially supported by Apple's Boot Camp, but after a bit of Googling, it seemed relatively safe, so I continued.
Installation was flawless. I inserted the Vista CD when Boot Camp asked for XP, and everything proceeded smoothly. It is a big installation though, consuming almost half of the 20GB I allotted to it. I followed these instructions to get the Apple drivers installed (not seamless, but it works) and everything is up and running.
I've only been using it for a few days, but all in all, I'm rather impressed. As a relatively recent Windows to OSX convert, I find the interface isn't so bad. I still prefer the Mac in terms of usability, but I have to admit, Vista is sort of pretty. It's hard, though, to miss Microsoft's tail chasing here. The new Windows menu file system layout is oddly reminiscent of Finder. Even one of the bundled screensavers is a pretty apparent clone of the default OSX saver. The new desktop modules basically put Dashboard on the desktop, and the new Aero window management features add some 3D eye candy to Expose, albeit at the expense of hot corners — not a good tradeoff for my habits, but admittedly pretty. It almost makes me wonder: do the Windows designers run OSX at home? Either way, to me, a little more Apple flavor in Windows is a welcome addition, but not necessarily a source of honor and pride for Microsoft.
IE7 also copies its killer app from Safari: both are best used to download Firefox. IE7 does have one nice feature that I've noticed in the few minutes I've used it, though: you can spawn a new tab by clicking a little stub button on the tab bar. A nice touch, although this Firefox extension does the trick as well.
Performance-wise, I'm satisfied. The new visual effects run smoothly on the MBP, although they should: the Pro has an ATI Radeon X1600 GPU with 256MB of dedicated graphics RAM. CAD apps are possibly marginally less snappy than with a barren XP install, but all in all very usable, even in cases where the apps aren't officially Vista-rated.
I first installed Vista under Parallels in Mac OSX. This worked, but in the interest of saving space and not having redundant installs, I deleted the image to install it in a separate Boot Camp partition. After doing this, I found that the current version of Parallels doesn't support booting from a Vista Boot Camp partition. I'm looking forward to this feature, as it's nice to be able to quickly jump into Windows, but have the option for a full boot for more demanding apps like 3D CAD.
Would I pay $400 for Vista Ultimate? Probably not, unless I absolutely had to use a Windows-only application. It's nice, but so is OSX and for that matter Ubuntu, given enough patience and skill in setup and configuration. That leads to what is, for me, really the biggest problem with Windows these days: I miss the UNIX console. Having to download a separate ssh client and install my own scp seems completely unreasonable, especially considering the 9GB+ install footprint with everything but the kitchen sink (and UNIX terminal standards) thrown in.
I look forward to the day when Ubuntu and other Linux distros truly reach consumer-ready status, and that day is coming. Even today though, it blows my mind to see kiosks proudly displaying the blue screen of death. I would never pay thousands of dollars for Windows licenses for something like subway car displays or even Times Square signage when the simple GUI and configuration arguments should realistically be thrown out the window, much like they are in most of the web servers of the world, in favor of reduced cost and increased stability.
To continue to thumb its nose at the Linux community's technology is Microsoft's mistake, and recently I've started to think it would be a fatal one. Vista continues Microsoft's commitment to this grand mistake, but also shows that they still have some fight to bring to the ring.
Maybe there's a correlation between the fact that I'm not a big Twitter fan and the fact that never in my life have I used wall, while find / | grep is always right at my fingertips.
From 'sfearthquakes' on Twitter, by Marc Hedlund:
One of my favorite business model suggestions for entrepreneurs is, find an old UNIX command that hasn't yet been implemented on the web, and fix that. talk and finger became ICQ, LISTSERV became Yahoo! Groups, ls became (the original) Yahoo!, find and grep became Google, rn became Bloglines, pine became Gmail, mount is becoming S3, and bash is becoming Yahoo! Pipes. I didn't get until tonight that Twitter is wall for the web. I love that.
(Via O'Reilly Radar.)
There's something beautifully ironic about using an open process for reviewing the protection of intellectual property. But hey, why not? Seeing the USPTO jump on the wiki wagon should be a pretty definitive sign that the rules are indeed changing and that's an exciting thing.
What's really going to be interesting, though, is seeing whether those at the helm of this project have what it takes to finesse a community in the face of the gaming that will inevitably occur in a system where the potential stakes are so high. It's good to hear that Malda et al. are being consulted, but only time will really tell if they can overcome the outside pressures. I suspect that the fact that what is considered "gaming" on digg.com will legally be considered "fraud" on uspto.gov will help, but clearly legal enforcement can only be part of the solution.
All that aside, most of all I'm impressed with the relative timeliness of the USPTO's experiment, given the usual lag where government and technology overlap. Maybe understaffing isn't always such a bad thing.
From USPTO Peer Review Process To Begin Soon: "An anonymous reader writes 'As we've discussed several times before on Slashdot, the US patent office is looking to employ a Wiki-like process for reviewing patents. It's nowhere near as open as Wikipedia, but there are still numerous comparisons drawn to the well-known project in this Washington Post story. Patent office officials cite the huge workload their case officers must deal with in order to handle the modern cycle of product development. Last year some 332,000 applications were handled by only 4,000 employees. The tremendous workload has often left examiners with little time to conduct thorough reviews, according to sympathetic critics. Under the pilot project, some companies submitting patent applications will agree to have them reviewed via the Internet. The list of volunteers already contains some of the most prominent names in computing, including Microsoft, Intel, Hewlett-Packard and Oracle, as well as IBM, though other applicants are welcome.'"
I was placing the first of the semester's many parts orders from DigiKey, a massive online supplier of electronic components, and finally decided to make an OpenSearch plugin for Firefox. This lets you search for DigiKey parts from the search bar in the upper-right corner of OpenSearch compatible browsers. I've only tested this in Firefox 2.0, and I'm pretty sure that it won't work in IE7 (surprise!) because IE7 doesn't support POST requests for OpenSearch plugins. Get Firefox.
The format for creating these plugins is a very straightforward XML file. You can find the docs on how to create your own on developer.mozilla.org.
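For reference, a minimal OpenSearch description file looks something like the sketch below. The ShortName and Description are arbitrary, and the search URL and keywords parameter name are illustrative assumptions rather than Digi-Key's actual endpoint; the Param element is the Mozilla extension that enables POST searches (the part IE7 doesn't support):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>DigiKey</ShortName>
  <Description>Search Digi-Key parts</Description>
  <InputEncoding>UTF-8</InputEncoding>
  <!-- method="POST" is what IE7's OpenSearch support lacks -->
  <Url type="text/html" method="POST"
       template="http://www.digikey.com/search">
    <Param name="keywords" value="{searchTerms}"/>
  </Url>
</OpenSearchDescription>
```

Save the file with an .xml extension and point Firefox's search bar at it; {searchTerms} is replaced with whatever you type into the box.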
NOTE: This plugin is in no way supported by the Digi-Key corporation. Please do not bother them with help requests if it doesn't work properly (probably because you are running IE). Contact the author with any questions or problems - at which time I will probably just tell you to Get Firefox.
The top 10 locations of the readers of this blog (since 4/2006):
Tired of burning extra CPU cycles by folding proteins and looking for E.T.? Try climateprediction.net, brought to you on BOINC, the Berkeley Open Infrastructure for Network Computing and the same platform that backs SETI@Home. From the official site:
What is climateprediction.net?
Climateprediction.net is the largest experiment to try and produce a forecast of the climate in the 21st century. To do this, we need people around the world to give us time on their computers - time when they have their computers switched on, but are not using them to their full capacity.
Climate change, and our response to it, are issues of global importance, affecting food production, water resources, ecosystems, energy demand, insurance costs and much else. There is a broad scientific consensus that the Earth will probably warm over the coming century; climateprediction.net should, for the first time, tell us what is most likely to happen.
Windows and Linux users can get started here. Mac OSX users will have to use the beta for now (I've been running it and it seems solid so far). I created an ITP team both for the regular version and the beta. The team names are both ITP and the team IDs are 6006 and 35, respectively.
Here are some captures of the rendered screensaver graphics:
One of the best things about having a blog is obsessively checking your statistics, which for me, currently a Google Analytics user, means opening a browser and logging in just to see the most recent counts for the day. Enter Dashalytics, a nice OSX Dashboard widget that shows a quick summary of your stats at the flick of the wrist. The clean design includes an appropriate amount of info for a quick glance and a link into the full Analytics page should you want to delve deeper.
Today I finally gave my RSS newsreader (NetNewsWire) a much needed cleanup, deleting at least most of the feeds which I haven't touched in months. I've been wanting to develop something simple to help people share their feed lists for a while, so this cleanup inspired me to at least whip up a quick page to convert the OPML feed list to HTML for posting.
It's pretty self-explanatory once you get the OPML file out of your newsreader.
File->Export Subscriptions does the trick in NetNewsWire (other readers should have something similar), and the script should work with either flat OPML or with the group information included, although it's anything but extensively tested.
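For reference, here is roughly what the two OPML shapes look like; the feed names and URLs are made up for illustration. Flat exports put outline elements directly in the body, while grouped exports nest them inside a container outline:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.1">
  <head><title>Subscriptions</title></head>
  <body>
    <!-- flat form: feeds sit directly in the body -->
    <outline text="Example Feed" type="rss"
             xmlUrl="http://example.com/feed.xml" htmlUrl="http://example.com/"/>
    <!-- grouped form: a container outline wraps its feeds -->
    <outline text="Tech">
      <outline text="Another Feed" type="rss"
               xmlUrl="http://example.org/rss" htmlUrl="http://example.org/"/>
    </outline>
  </body>
</opml>
```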
If there's any interest, maybe I'll whip up a quick site to compile the feed lists from the OPML files. In any case, here's my reading list at the moment:
To start, I haven't yet read Chris Anderson's The Long Tail, but I have seen him speak on the subject. So to some degree this is informed by Mr. Anderson's views, but if this feels like a chapter from the book, let me apologize in advance and claim independent invention.
As many of you are probably already aware, the 'long tail' refers to a section of the power law distribution which comes up in countless aspects of our world — most commonly in reference to media distribution, wherein a 'chosen few' make up a disproportionately large share of sales. The long tail is the huge number of items which each have a small number of sales. More specifically, the concept points to the fact that as modes of distribution change for largely technological reasons, the hits (think Britney and Star Wars) are becoming less important while indie pieces and cult classics out on the tail of the curve are selling more and becoming more relevant.
If you don't believe this, go see Mr. Anderson speak, or I would presume that you could just read his book as well. You could also find a quick introduction to the topic on Wikipedia. I think it's pretty clear that things are changing, and that most arguments on the topic will take place over the degree of change and its implications, not the presence thereof. Remember Tower Records?
Rosy-eyed and inspired by the promise of a new world of our own creation, in the beginning I saw only the upsides to this trend. Isn't it fantastic, I thought, now I can finally escape those lousy radio singles and hollow Hollywood action flicks and find media with real substance, something that really speaks to me. And if I can't find it, I can always just roll my own.
I still largely do think it's great actually, particularly the user-generated aspects, but I'm starting to see a big potential cultural downside. As we have more choices across the board, that means a denser distribution along almost any axis of view: more hardcore punk, more Gelugpa chanting, more documentaries about peanut farming, you name it — just more.
Again, isn't this great? Well, at first glance it is, at least through the idealist's lens that would tell us that, given all of this wide and varied information, we will graze across it, gobbling up a wide and varied cross-section of opinions, knowledge, and inspiration.
But is that really what we will do?
Signs point to no. In online communities, I can't see a lot of evidence that Air America fans are drifting over to Fox News or Ann Coulter for a little balance. They might, however, nominate their favorite liberal blog for an award. Or vice versa.
When we're given an all we can eat information buffet, it seems that we tend to just stuff ourselves on the same old meat and potatoes we're used to, while ignoring that wide diversity that brought us to the table in the first place. orgnet.com has an interesting piece called Political Books and Polarized Readers that analyzes the 'also bought' data from Amazon to show this effect in sharp relief.
And so then, instead of just measuring our increasing engagement in a broadening scope of opportunities, the growth in the long tail is actually fueled in large part by a narrowing of individual focus. When we read, hear, or watch something we like or agree with, we can now hunt down more of the same, almost effortlessly. And few, if any, of us can resist the temptation of being told over and over again that we are absolutely and completely right.
So the tail grows and grows, as we snatch up long lost import singles and director's cuts and books that express the same opinions as that last book we liked so much. And we are happy, but perhaps not fulfilled.
Throw in an effective recommendation system of the future and now you've really got a problem, not because it won't work, but because it will. Given an infinitely long tail, you can find an infinite number of works that align with any narrow point of view (an exaggeration to be sure, but within the scope of our media consumption capabilities, not excessively so). And how to browse an infinite catalog but through an innovative recommendation system? But then, given that perfect system, it will know that, since I loved that Bill O'Reilly book so much, I must want nothing more than books by a selection of Bill O'Reilly clones. And I probably do — or at least I'll gobble them up happily if that's all I see.
With all of this confirmation of our viewpoints, what do we get but a polarized world where each side shares little but an adherence to its opinions that borders upon the religious? And think not of a two-party system of disagreement, but of a hectogon where each side, though small, can be just as polarized and isolated from the rest.
At least at some ranges of scale, the value of media as a whole grows with the number of options presented. Television is worth much more with two channels than with one, and more still with ten channels or fifty. I believe that in an ideal world this trend has the chance to continue onward to infinity. It's up to us as consumers and especially as technologists to attempt to continue to create and extract this added value.
There's a world of information out there — use it. Not just to read a rehash of that same blog post you just read six times in theme and variations, but instead to truly expand your horizons. We all would do well to expose ourselves to the other side from time to time. In the worst case, we are better informed and prepared to discuss or argue for our side, and in the best case we might learn something truly profound that shifts our viewpoint entirely.
On the technologists' side, Wikipedia comes to mind (as it so often does) as a good example of a structure that can encourage this kind of growth. Even ignoring the fact that it is user-generated, simply through diverse content and dense hyperlinking, I find it almost impossible to read about just one topic on any given visit, and often these journeys lead to surprisingly diverse content even after only a few links. Recommendation systems will need to be designed with these thoughts in mind and encourage us to learn, not just buy the same old comfortable materials and tired entertainment, while keeping enough comfort and familiarity to maintain market share and thus relevance. How about a reco system with a 'how crazy are you feeling today?' slider that lets you fine tune the amount of diversity that suits your current mood?
I do believe that the exploding mass of content available online and otherwise is an inherently good thing at its core, but we need ways to manage it efficiently and most importantly we need to be of the mind to use it responsibly and effectively. Mental laziness disguised as a voracious appetite for learning (the same thing over and over again) is nothing strange in this new world, so let us not become victims of this masquerader.
Sorting by popularity so that the favorites of the group float to the top achieves little more than what the 'old media' has been doing for decades: letting the majority decide for the diverse minorities. Conversely, as collaborative filtering systems improve, there will be a point where increased absolute filtering performance will only serve to amplify the echo, as our individual past is projected forward to become our future, preventing us from growing and expanding mentally and philosophically. Between and along the edges of these regions lies a land of great need and opportunity.
NewsViz is an application I am working on for Mainstreaming Information, a class on information visualization at ITP. This is an early prototype, but the goal is to allow people to easily compare the outputs of various news outlets in order to compare and contrast the "facts" from each.
There are many additions and refinements on the way, including improved navigation, visualization of relationships between keywords, and the ability to easily navigate to the full content of any story, but in the meantime, feel free to take it for a spin and contact me with any comments or suggestions.
I saw this story from the Times this morning on Digg and just now happened onto the surface of an interesting resultant discussion on Alex's blog about the relative dangers of corporate vs. governmental regulation:
Cat and I have an ongoing discussion on whether corporate vs. governmental regulation is more dangerous. She leans towards the corporate being the lesser evil, where I believe corporate regulation (that is, companies deciding policy, disregarding lobbying) is the greater evil.
Visa just announced their intention to block payments to the Russian music download site AllofMP3. AllofMP3 insists upon its legality in terms of Russian copyright law—but has promised a change in its business model, hoping for more international acceptance (of course, this will come at a price, as the downloaders are quite happy with the current system).
But is Visa's extreme measure to block payments to AllofMP3 acceptable? As a digg poster commented, Since when has it been Visa's obligation to judge business morality? While I believe that businesses should have models of moral obligation, decisions such as these should be questioned by general consumers and more closely examined by relevant subscribers. Business policies, and corporate morality policies should be easily available, and digestible to consumers, subscribers, anyone who wants them, really—public accessibility is key. Archives of past business and policy movements should be equally accessible.
My personal position is that on strict philosophical grounds, companies and governments are in the position to, and in fact are expected to, make moral judgements. Industry not dumping toxic waste into rivers or bars not serving liquor to six year olds are two relatively simplistic cases where government and corporate regulation overlap. Both may be against government regulations, but in either case, even if there weren't regulations in place, one would hope that the public as a whole would demand some level of responsibility on the part of the businesses.
The problem is when governments and corporations make seemingly moralist decisions that alienate their users, either by preventing them from doing things which they believe harmless or simply forcing them to align by association with platforms with which they disagree.
Is AllofMp3.com legal? I don't know. I'm not a lawyer, and I don't know where the money goes once I send it to Russia. However, if I like cheap music and Visa is "Everywhere I Want to Be", then something doesn't add up.
This, however, is where governments and corporations differ, even if they are holding hands under the table. Visa is an independent entity, and as such is free to make reactionary, moralist decisions, even if misguided. And as soon as they alienate enough of their customers, you can bet there will be a new player on the market, with lower interest rates and better service to soak up the profits. Capitalism can be ugly at times, but in its ability to turn sheer greed into a positive impetus, it's magical.
To some extent, the same is true of government, for which we must be eternally grateful. However, there are a couple of major differences. First, it's hard to perform a wholesale change of government without bloodshed. If Sprint pisses me off, I can always switch to Cingular, but if George Bush pisses me off, the only things I can really do are complain, leave the country, or start a revolution, none of which manage to be particularly attractive or effective. There is impeachment, but historically that is roughly as common at the presidential level as revolution.
The other difference is that, truth be told, the first responsibility of an American business is to its own interests, while by the best definition we have, the first responsibility of the American government is to our interests. Where the situation gets murky, of course, is that in order to protect its own interests, a company must also protect its customers' interests. But that is a secondary objective, not the primary one. So finally, here's the point. I don't think Visa is evil.
I think Visa is wrong.
In doing what they feel necessary to in some way save their own asses (and I don't want to hear the same people who preach how moral-less and greedy corporations are now all of a sudden decide that they are driven by some arbitrary moral agenda), be it from future regulatory action from the government, future lawsuits from the RIAA, or future backlash from customers that sincerely respect the value of copyright (doubtful), Visa has continued what could easily turn into a major miscue. Conservative policy and mediocrity are the bane of passionate users.
This isn't the first time credit processors have made a moral distinction. The vast majority of credit card issuers haven't served Internet gambling sites for almost as long as there have been Internet gambling sites. An entire industry has formed largely around this decision. Companies like FirePay and Neteller stepped in to fill the void (and make a substantial profit) almost immediately. And now, due to increased scrutiny by the U.S. government and, of comparable relevance, the fact that these two startups have grown into large, publicly traded corporations, they are on the brink of stepping down as the feeding tube for Internet gambling.
But what will happen? The gambling sites will not be starved by any stretch of the imagination. The current payment processors will either reconsider their stance or fade into the background, largely marginalized without the differential from traditional payment services (like Visa) created by the very black and white willingness and non-willingness to accept gambling transactions, and newcomers will step up to fill the void. No amount of legislation or threats of prison time or even death can keep an entire world of people from seizing an opportunity to make quick millions. Whatever the potential penalty, there will be a supplier — just not a publicly traded one.
The problem from Visa's point of view is that it's easy to fall into the trap of believing that only a few people are hurt by any of these decisions. First, each group is much larger than one might think, and second, the disenfranchised groups add up very quickly and tend to be vocal. It's much like the downfall of President Bush's approval rating. A chip at a time, one small group of Americans at a time, he lost support, all the while failing to realize, or at least care, that he was sliding down the side of a mountain from a pinnacle of popularity to the current widespread disapproval. That's what will happen to Visa if this trend continues. If Visa suddenly decided that drinking is bad and that Visa will not be accepted in establishments that serve alcohol, what do you think would happen to their membership?
This feedback loop is key to maintaining control over both government and private interests, but the public can only react to what is seen. And there are two equally important sides to this absorption of information — production and consumption. If either side is lacking, this massively parallel system of checks and balances breaks down. The tendency is that people's alarms are triggered and things change before it is too late. But admittedly, that works every time...until it doesn't.
Pay attention, and if you do nothing else — think.
The upside to all of this? I visited allofmp3.com and at first glance I have to respect the approach. The site design is adequate, but the interesting part, and the part I would like to see adopted elsewhere, is that you pay for what you get, in terms of quality. Well, let's say bitrate. You can judge the quality of the music for yourself.
For example, say I want to buy Diddy's new CD (I don't). If I just want to hear the tunes at a decent quality (128kbps), I can get the 19 songs for $2.42, or if I will tolerate still lower quality and don't care to have my own copy, I can listen for free. If on the other hand I must hear every nuance, I can pay $5.21 and get 320kbps (near CD quality). That's choice, and that's what I like to see. For comparison, in terms of data per dollar, allofmp3.com is still more expensive than a CD, with the cost of Diddy's CD extrapolated to PCM bitrates ringing in at just over $26 (based on the allofmp3.com 128kbps MP3 price), twice the cost of the physical CD on Amazon. Care to do the math on a "legitimate" online music store?
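The extrapolation above is easy to check with a few lines of Python. The album prices come from the post, and 1411.2 kbps is the standard bitrate of uncompressed CD audio; this is a back-of-the-envelope sketch, not allofmp3.com's actual pricing formula:

```python
# Back-of-the-envelope check of the allofmp3.com price extrapolation.
# Figures from the post: $2.42 for the album at 128 kbps, $5.21 at 320 kbps.
# Uncompressed CD audio (Red Book PCM) streams at 1411.2 kbps.

PRICE_128 = 2.42   # dollars for the album at 128 kbps
PRICE_320 = 5.21   # dollars for the album at 320 kbps
PCM_KBPS = 1411.2  # CD-audio bitrate

def extrapolate(price: float, kbps: float, target_kbps: float) -> float:
    """Scale an album price linearly with bitrate (dollars per kilobit)."""
    return price * target_kbps / kbps

pcm_price = extrapolate(PRICE_128, 128, PCM_KBPS)
print(f"${pcm_price:.2f}")  # just over $26, roughly twice the physical CD
```

Interestingly, running the same calculation from the 320 kbps price gives a lower PCM-equivalent figure, so the higher bitrate is the better per-bit deal.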
I just came across an article by Paul Graham (who, among many other endeavors is a partner in Y Combinator) entitled How to Start a Startup. I was going to pull a few quotes and write an informative summary, but as it turned out I copied and pasted most of the article. So just go read it for yourself. Here's the intro to get you warmed up:
You need three things to create a successful startup: to start with good people, to make something customers actually want, and to spend as little money as possible. Most startups that fail do it because they fail at one of these. A startup that does all three will probably succeed.
And that's kind of exciting, when you think about it, because all three are doable. Hard, but doable. And since a startup that succeeds ordinarily makes its founders rich, that implies getting rich is doable too. Hard, but doable.
If there is one message I'd like to get across about startups, that's it. There is no magically difficult step that requires brilliance to solve.
Bill Moyers: The Net at Risk
Moyers on America
t r u t h o u t | Programming Note
Airdate: Wednesday, October 18, 2006, at 9:00 p.m. on PBS.
(Check local listings at http://www.pbs.org/moyers.)
"The Net at Risk" reports on what could happen if a few mega-media corporations get their way in Washington.
The future of the Internet is up for grabs. Big corporations are lobbying Washington to turn the gateway to the Web into a toll road. Yet the public knows little about what's happening behind closed doors where the future of democracy's newest forum is being decided. If a few mega media giants own the content and control the delivery of radio, television, telephone services and the Internet, they'll make a killing and citizens will pay for it. America's ability to compete in the global marketplace, the unfettered exchange of ideas online, and broadband services that could improve quality of life for millions are at stake. Some say the very future of democracy itself may hang in the balance. In "The Net at Risk," Bill Moyers and journalist Rick Karr report on the wannabe "lords of the Internet" and examine how promises by the big tel-co companies of a super-high speed Internet in return for deregulation and tax breaks have gone unfulfilled while the public has paid the price. After the documentary, Moyers leads a discussion on media reform to explore the real-world impact of deregulation on communities and citizen participation in democracy.
Microcontrollers (in this case PICs) programmed with identical code. Physical characteristics (pin layouts, etc.) may differ from node to node within certain restrictions.
The network uses the standard RS-232 serial protocol for data transfer, with the addition of a proprietary signaling system that uses the same wire to send ready-to-send, ready-to-receive, and receive-success messages. Each packet consists of the target node address, the address of the sending node, the hopcount, and a four byte payload.
Data is transferred serially using a wired connection.
In the protocol, locations are specified by an eight bit address, allowing up to 255 nodes. In the current hardware implementation, however, the nodes are addressed at boot time by hard-wiring four pins of the microcontroller, thus limiting the network to the four low-order bits of the address, or 16 nodes.
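The packet described above (one-byte destination and source addresses, a hop count, and a four-byte payload) can be sketched as a fixed seven-byte frame. This is an illustrative Python encoding rather than PIC firmware, and the field order is an assumption, since the write-up does not specify it:

```python
import struct

# Hypothetical on-wire layout (field order assumed): one byte each for
# destination address, source address, and hop count, then 4 payload bytes.
PACKET_FMT = ">BBB4s"  # big-endian, 7 bytes total

def encode_packet(dest, src, hops, payload):
    """Pack the four fields into their 7-byte wire representation."""
    assert 0 <= dest <= 255 and 0 <= src <= 255  # eight-bit address space
    return struct.pack(PACKET_FMT, dest, src, hops, payload)

def decode_packet(frame):
    """Unpack a 7-byte frame back into (dest, src, hops, payload)."""
    return struct.unpack(PACKET_FMT, frame)
```

With only the four low-order address bits hard-wired, the two high-order nibbles of the address bytes simply go unused in the current hardware.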
The stack for this implementation is very simple, as (using the Internet comparison) only levels up to and including the IP level of the stack are included. A TCP-like protocol could be constructed atop this stack, and then any desired application protocols further atop that protocol. As it stands, however, the network implements:
Network layer: end-to-end addressed packet transfer
Data link layer: RS-232 plus additional signaling
Physical layer: TTL logic I/O
Each processor in the video is connected to a set of three LEDs, enabling the viewer to see network traffic in process.
Red -> Node is transferring data
Yellow -> Node is receiving data
Green (in combination with Red or Yellow) -> Transfer / Receive succeeded
Blinking green and yellow -> Packet arrived at ultimate destination
Each node has a local address (in this case, 0-5), and each node can send a packet of data to any other address in the network. The transport protocol is simple (and unreliable): each node hands the packet off to a random neighbor, trying first any neighbor that is not the node the packet came from. Thus if node 0 hands a packet to node 4 and the packet is not addressed to node 4, node 4 will attempt to pass it on to any connected node other than node 0; only if that fails will it pass the packet back to node 0.
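The hand-off rule can be sketched as a next-hop chooser. This is an illustrative Python model, not the PIC firmware; the `accepts()` method standing in for a successful handshake is an assumption of the sketch:

```python
import random

def choose_next_hop(neighbors, came_from):
    """Pick the next node per the rule above: try random neighbors other
    than the one the packet arrived from; fall back to the sender only if
    no other neighbor accepts the hand-off."""
    others = [n for n in neighbors if n is not came_from]
    random.shuffle(others)          # randomize the order of attempts
    for n in others:
        if n.accepts():             # hypothetical: models a busy/ready node
            return n
    return came_from                # every other neighbor was busy
```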
The video shows 5 packets being sent from the lower right node of the network to the node in the upper left corner. This shows the variable routing and a few of the possible routes a packet can take in the process.
The network was designed to be a lowest common denominator of mesh network design - a basis for possible future development. As such, it is a success. With a limited number of nodes, packets move from point A to point B in a reasonable number of hops, but it is clear that a good deal more work would be in order to develop a scalable network.
The network consists of PIC microcontrollers communicating via RS-232 serial, with the addition that the same data line is used for signaling from the recipient: both a ready-to-listen and a data-received signal. This is achieved using the circuit shown in Fig. 1. The transmit pin for each data line is connected via a resistor to a second pin of the microcontroller, which is configured as an input (TXACK), and this second pin is connected to the receive pin of the remote device. When a node wants to transmit, it asserts its TX pin high, which also raises TXACK and the remote RX line high through the resistor. Then, assuming there is a remote device connected that is not busy and sees the signal, it asserts its RX pin low, which pulls the TXACK pin on the transmitting device low as well (the resistor prevents an over-current between the two driven outputs). The transmitting device sees this ready signal, waits a specified small amount of time for the receiver to enter listening mode, and then sends the data. After the data is sent, a similar exchange takes place to confirm receipt.
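The signaling exchange can be modeled in miniature: the resistor behaves like a weak coupling that a hard-driven low on the receiving side overrides. The class and function names below are hypothetical; only the logic levels follow the description above:

```python
class Line:
    """Model of one data line: the resistor lets a driven-low output on
    the receiving side override the transmitter's weakly coupled high."""
    def __init__(self):
        self.tx_drive = None   # level driven by the transmitter's TX pin
        self.rx_drive = None   # level driven (or not) by the receiver's RX pin

    def level(self):
        # A hard low on the receiver side wins over the resistor-coupled high.
        if self.rx_drive is not None:
            return self.rx_drive
        return self.tx_drive

def handshake(line, receiver_busy):
    """Walk through the ready-to-send / ready-to-receive exchange.
    Returns True when the transmitter sees TXACK pulled low and may send."""
    line.tx_drive = 1            # transmitter asserts TX high; TXACK/RX follow
    if not receiver_busy:
        line.rx_drive = 0        # a free receiver pulls its RX pin low
    return line.level() == 0     # TXACK reads low -> clear to send
```

A busy or absent receiver never pulls the line low, so the transmitter sees TXACK stay high and knows the hand-off failed.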
Routing takes place via an extremely simple protocol. Each device can be connected to as many as three other devices, and these connections are sensed each time a packet is sent, via the protocol described above. When a packet originates, it is sent to one of the (up to) three connections, tried in random order: whichever one first successfully receives it. If the packet is not at its ultimate destination, it is forwarded to another random node, with precedence given to nodes other than the one from which it was received. This prevents two-node loops. Even this one simple rule provides an enormous reduction in hop count over a purely random scheme and creates a somewhat usable, if poorly scaling, network.
As you can see in Fig. 2, for one particular simple six-node topology, the probability of a packet traveling from one extreme of the network to the other in 5 hops or fewer (without making a loop) is 75%. In Fig. 3, two nodes are added, and while the comparable no-loop case entails only one more hop, delivery succeeds within six hops only 56% of the time.
On the other hand, in the six node example, by tolerating a single loop which allows up to 9 hops, the probability of success increases to 93.75%, which can be seen in Fig. 4. Analysis of the eight node example quickly becomes difficult because of the greater number of combinations of loops, but it seems that allowing one loop (up to around 12 hops) increases its chances of success to around 78%. Time permitting, computer simulation would allow a better analysis of larger networks of this type.
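The kind of computer simulation suggested here might look like the following Monte Carlo sketch. The six-node adjacency below is an assumption (the actual wiring of Fig. 2 may differ), so the numbers it produces will not necessarily match the 75% and 93.75% figures:

```python
import random

# Adjacency for a hypothetical six-node mesh (each node wired to at most
# three neighbors); the real topology of Fig. 2 may differ.
TOPOLOGY = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
    3: [1, 4, 5], 4: [2, 3, 5], 5: [3, 4],
}

def deliver(src, dst, max_hops, rng):
    """Randomly forward a packet, avoiding the previous node when possible.
    Returns True if it reaches dst within max_hops."""
    node, prev = src, None
    for _ in range(max_hops):
        choices = [n for n in TOPOLOGY[node] if n != prev] or [prev]
        nxt = rng.choice(choices)
        if nxt == dst:
            return True
        node, prev = nxt, node
    return False

def success_rate(src, dst, max_hops, trials=20000, seed=1):
    """Estimate the delivery probability within a given hop budget."""
    rng = random.Random(seed)
    hits = sum(deliver(src, dst, max_hops, rng) for _ in range(trials))
    return hits / trials
```

Raising `max_hops` plays the role of tolerating loops, so comparing, say, `success_rate(0, 5, 5)` against `success_rate(0, 5, 9)` mirrors the hand analysis above, and swapping in an eight-node `TOPOLOGY` would extend it to the harder case.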
I did not have a specific numeric estimate of performance prior to construction and analysis, but the six-node network performs somewhat better than I expected and the eight-node network somewhat worse. This horrendous scaling could be improved with a topological shift to a small-world system, in which a number of these meshes are connected together via specifically designated router nodes, with the address space configured such that it can be deduced from the address whether a packet should stay within a subnet or propagate onward to another. The relative success of the mesh with small-scale networks suggests that this could be a workable solution, although the benefits over a more standard non-mesh topology are dubious given the rudimentary routing system.
The routing system itself could be improved, provided each subnet remained relatively small, by implementing a back-propagation system in which each packet is tagged with the address of every node through which it passes. Upon receipt of the packet, this information could be sent back to each of the nodes involved in the original transaction. The hop counts of the outbound and inbound packets could then be compared against hop counts from previous exchanges stored in memory to, over time, find an optimal next hop toward any node in the network. This activity might need to be rate-limited to balance the risk of flooding the network with optimization messages against the desire to adapt quickly to changes in topology.
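One way the learning step might be sketched: each node remembers, per destination, the lowest hop count it has observed and the neighbor that achieved it. All names here are hypothetical, and this models only the table, not the message exchange:

```python
class RouteTable:
    """Per-node memory of the best known next hop toward each destination,
    learned from hop counts carried by packets passing through."""
    def __init__(self):
        self.best = {}  # destination -> (hopcount, next_hop)

    def observe(self, destination, next_hop, hopcount):
        """Record a route only if it beats the best hop count seen so far."""
        current = self.best.get(destination)
        if current is None or hopcount < current[0]:
            self.best[destination] = (hopcount, next_hop)

    def next_hop(self, destination):
        """Preferred neighbor for a destination, or None to signal that the
        node should fall back to random forwarding."""
        entry = self.best.get(destination)
        return entry[1] if entry else None
```

Because entries only ever improve, a stale table would need aging or occasional resets to adapt to topology changes, which is exactly the flooding-versus-adaptation trade-off noted above.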
After this first foray into mesh networking, I must say that I am pleased at the results of the basic implementation and would like to spend more time in the future to investigate and experiment with various learning schemes that could improve performance and scalability.
Last week, jumpWORD.com, the site I worked on for much of the summer, was featured on O'Reilly Radar:
Jumpword is a powerful set of tools for use with your phone. I really like the strong tie-in to the web. They are currently working to make the modules MySpace compliant (the script tags cause them to be blocked). They also aim to provide you with more control over the look of the modules -- a necessary improvement.
I think that most people will not grok how to use these tools or how to set them up on the Jumpword website. I think that Jumpword is going to have to build some applications with their tools or get more games to use them (or become the next MySpace fad!). They are still looking for business models, but are currently considering offering the service to corporations in need of a mobile toolkit for events.
Read the full post on radar.oreilly.com
In light of the recent Blogspot ban in India, the blogging community in Pakistan would like to present as a gift to the Indian blogging community a small script that can be inserted into their websites and converts all Blogspot links into URLs that use the proxy servers of pkblogs.com
Download the ZIP file Pkblogs Script.zip
Please consider this as a gift from Pakistan to all Indians in hope of building friends across the border
God bless the hackers.
Another highlight of the talk was the conclusion when Red asked Ethan what we might do to help the world, what issues we might take on, and he responded:
You've got to find something you're passionate about, but passionate in a way that scares you. If it doesn't scare you, you haven't found the right thing.
The chapter, The Uses of Sidewalks: Safety, from the book The Death and Life of Great American Cities written by Jane Jacobs and published in 1961, is profoundly relevant today in its take on urban safety as a function of emergent effects of sidewalk as network. The closing paragraph of the chapter reads:
On Hudson Street, the same as in the North End of Boston or in any other animated neighborhoods of great cities, we are not innately more competent at keeping the sidewalks safe than are the people who try to live off the hostile truce of Turf in a blind-eyed city. We are the lucky possessors of a city order that makes it relatively simple to keep the peace because there are plenty of eyes on the street. But there is nothing simple about that order itself, or the bewildering number of components that go into it. Most of those components are specialized in one way or another. They unite in their joint effect upon the sidewalk, which is not specialized in the least. That is its strength.
In example after example, Jacobs illustrates the beauty of the ordered city, not one held under siege by heavy policing, but one that primarily polices itself. This, over 40 years later, is exemplified by the New York that I am lucky enough to inhabit today.
What is most significant here is that the underlying network of all of this street-level behavior, the sidewalk, is not specialized. Within reason, all types of people and behavior are tolerated and appreciated. The successful city realizes that, with a few notable (and usually violent) exceptions, activity trumps inactivity: it is beneficial for the network as a whole and should be treated with equal respect, regardless of traditional moralist standards of "acceptable" behavior.
This activity then bolsters the strength of the network. It is a beautifully elegant natural solution, BitTorrent meets SneakerNet. Activity begets activity, and also provides the resources and safety necessary for this added activity. In this case, it is not bandwidth that is at issue, but instead eyes to monitor the safety of a city's streets.
The million-dollar question remains, however: how to foster this type of activity in a place where it does not currently exist by nature. Jacobs lists a number of failed attempts, typically very harsh segregation that she parallels to the Turf system used by street gangs, but I have yet to find a clear positive answer as to how to revitalize a neighborhood all at once without running into the problems of artificially imposing culture upon it.
What is clear, however, is the notion of the sidewalk as network; specifically, a network that benefits and flourishes under what may appear to some to be the burden of added activity. I think this is second nature to most of us who have lived in or visited thriving urban environments as well as their depressed counterparts. The phenomenon of safety in numbers seems obvious, but still some planners and developers must only feel safe when they are alone.
There are strong parallels here to the current debate over net neutrality, where, as a knee-jerk reaction to a few "internets" that were slow to arrive, small-minded regulators come to the quick "realization" that artificially imposed sanctions on usage patterns and their priorities are the answer, ignorant of the fact that this ruins the elegant simplicity of the network itself; much as some people feel that a bustling nightlife in their neighborhood will inevitably bring crime, when generally the opposite is true.
In the case of net neutrality, how hacker-proof will these prioritizing systems be? I have yet to see a non-trivial example of rights management or copy protection that a handful of well-motivated college students were unable to crack. And then where are we? Not only do we have tubes half filled with trash, but the smelliest of the garbage, that of the criminals who have learned to defeat the system, has priority over the rest of our content. It's much the same argument as is often made against gun control, but I think precedent shows that these laws would be even more difficult to enforce.
If you don't believe me, go search Torrentz for your favorite high-priced software package. Almost invariably, its developers have spent countless dollars implementing various copy-protection strategies only for them to be almost immediately broken by "crackers" across the globe (don't steal software). I think I hardly need to suggest what is likely to happen if means are opened to prioritize traffic on the internet. The fact is, I don't know, but I am willing to bet that it isn't the simplistic, utopian result that the regulators envision.
The broadest lesson to be taken from all of this is that whenever networks are examined beyond the most trivial of examples, whether in the technological or the human arena, things are rarely as simple and straightforward as they appear. Often precisely the opposite is true, and even more often, simplicity of policy is beauty in practice.
Today marks the beginning of the jumpWORD developer blog, a site for updates and discussion about all the services we are building at jumpWORD.com. Bookmark it or subscribe to the Atom feed to keep up to date with the rapid development of the newest platform for distributing content targeted at and coming from mobile devices.
I just finished a perl script, based on one by Adriaan Tijsseling that grabbed from the Recently Played feed (mine was empty); my version instead creates a cloud of artists from last.FM's XML feed of most played artists. You can see the script in action at the bottom of the right-hand sidebar of this page.
You can get the script here. You will have to rename the file with a .pl extension, change 'relevante' (my username) to your lastFM username, change the user agent on line 3, set your domain on line 4, and set the file path on line 5 to point to an appropriate directory on your server. Then run the script on a cronjob and include the output file in your blog or webpage (if you are using PHP, you can use <?php include("/path/to/thefile.txt"); ?>). Then everyone can see all the crappy music you REALLY listen to.
A few days ago I switched my PC to Ubuntu Linux. It's a slick distro - very easy to install and pretty out of the box. After a little poking around in the xorg.conf file, I was able to get a Cinema Display and an old Dell CRT working together in a nice dual monitor setup with a dual-head ATI Radeon 8500 graphics card. Here's the xorg.conf.
I also found a nice page about theming Ubuntu to look like Mac OSX. Pretty.
Finally, I wanted to set the box up as an AFP server so I could easily access fileshares from my MacBook. There's an open source project called Netatalk that does just that. However the binary version that installs using Ubuntu's apt-get lacks the authentication mechanism required to allow logins (it worked but only with guest access). Here's a thread from the Ubuntu forums that details how to download and compile the correct version from source code.
Welcome. I have started this site for myself and others as a memoir of my trip through life and, for now, the Interactive Telecommunications Program experience and as a place to log the ideas and thoughts that otherwise seem to slip away. I look forward to comments and criticisms, helping and being helped, and whatever else comes my way. Life is good.