obj_streamgraph-generator: iOS Streamgraph Library

February 7th, 2012 § 0 comments § permalink

Recently, I’ve been playing around with Streamgraphs, a.k.a. stacked graphs, a.k.a. stacked area charts. One of the first examples of this chart type to gain wider traction was the New York Times’ The Ebb and Flow of Movies, which showed box office receipts in a very organic and fluid form (which of course also makes it hard to read – but it’s enough to get a rough idea of what’s going on).

In the scientific realm, Lee Byron and Martin Wattenberg discussed Streamgraphs in detail in their excellent 2008 paper Stacked Graphs – Geometry & Aesthetics. Lee Byron also provided an open-source version of the Streamgraph code that he had used for the paper.

Based on that code, I’ve created an Objective-C version of the Streamgraph layout, geared towards iOS (i.e., iPad, iPhone). You can find it in this github repository. The main work went into porting the Processing/Java code to Obj-C (not having typed arrays is a pain), and I also produced a demo app that displays Streamgraphs on iPads (uncomment lines in the viewDidLoad method of the ViewController for other layouts/colors). Here are some examples:
Streamgraph Layout
Stacked Layout
Themeriver Layout

Now, take that code and build something beautiful with it. And let me know if you find bugs :)

So what is this Virtual Projection and when can I buy it?

January 24th, 2012 § 7 comments § permalink

The last few days have been somewhat crazy, as we saw a lot of interest in Virtual Projection (VP), a project by Steve Feiner, Sebastian Boring and myself that will be published at this year’s CHI in Austin. First Golem.de, then Engadget and finally The Verge all picked up the story and the corresponding video. I want to use this blog post to provide a little more information on the project.

In case you haven’t seen the video, take three minutes of your time and watch it:

When we started with Virtual Projection, the initial idea was simply to artificially replicate the workings of an optical projector. Just pointing at a wall and pressing a button to make an enlarged image appear is a very easy and powerful concept. Transferring information to a wall display, by comparison, is difficult (fiddling with cables, struggling with software issues), so we wanted to make that just as quick and painless. But once we had reached that goal, it became clear that, since we’re simulating the whole thing anyway, we can even improve on the metaphor. While Virtual Projection has the clear downside that it requires a suitable display and does not work on just any surface, we can at least “fix” some of the downsides of its real-world model.


One of the first things was getting rid of distortion: When a projector is aimed at a wall at an angle, keystone distortion can arise, warping the resulting image. Virtual projections can be freely adjusted when it comes to distortion and transformations of the resulting image: It’s possible to (a) fully recreate a projector’s image, (b) remove at least the distortion, (c) ignore the orientation of the handheld (e.g., to have a photo always upright), (d) ignore both orientation and scaling (e.g., to have the photo in its original size), or (e) ignore all transformations and just use the technique for selecting a display that is then used fullscreen (e.g., to show a video or a presentation).


In addition to this control over transformation and distortion, virtual projections can also be “decoupled” from the handheld. While an optical projection is always fixed to its light source, we can fix virtual projections to displays and also create several of them at the same time. The above image shows a typical VP workflow: We first (a) start an application (for that, we implemented something very similar to Apple’s Springboard for the iPhone) and (b) interact with it. Once we (c) point at a display, a preview of the VP appears. By (d) long-pressing we can control the VP’s placement on the display and (e) fix it by lifting the finger. Both the handheld view and the VP are now synchronized, so (f) interacting on the handheld changes the VP (in this example: the currently highlighted section of the photo. Note that the VP shows the whole photo, while the handheld is only able to display a part of it). By pointing at the VP, we can (g) select a different part of the photo to be shown on the handheld by tapping. Once we’re done, we can (h) remove the VP by long-pressing again and dragging it off the display. In case you’re wondering where the transformations/distortions went: They are predefined for each application, so in this photo-viewer example we keep everything except the distortion (i.e., type (b) from above).

This also works with multiple VPs: By pointing at an inactive VP on the display and tapping, the respective view immediately becomes active and visible on the handheld (the previously active VP stays on the display and continues running in the background on the handheld). Another option is to shake the handheld to bring up the menu (see (a)) and switch to a background app or start a new one. It’s no coincidence that starting and working with apps in VP is similar to regular application management on a smartphone: We wanted to show that the VP interaction technique could easily be integrated into existing smartphone operating systems. Of course, due to security restrictions we weren’t able to integrate it completely (shaking the device, for example, was our replacement for pressing the Home button on the iPhone). However, our VP implementation runs on a regular, non-jailbroken iPhone and uses no private libraries.


With this interaction and tracking framework we built several applications. The video shows many more, but here are three interesting ones. (a) is the example from above, a photo viewer that shows whole photos on the display and parts of them on the handheld. It’s possible to quickly switch the visible part by pointing and tapping. The handheld’s perspective frustum determines which part that is, but it’s always possible to see it in the preview on the display (grey-ish border). (b) shows an image filter that can be applied to photos (in this example, greyscale). Image filters work like regular virtual projections, except that they show nothing on their own. But placed on another VP that displays a photo, they filter it. It’s also possible to combine several of them by stacking them on top of each other. (c) finally demonstrates multiple maps next to each other. Map viewers work similarly to photo viewers in that they show a larger section of the map than fits on the handheld. In this example, the handheld also works as a “magic lens”: It shows a satellite image for the current section, while the stationary display shows the road map. By moving the handheld in front of the display, that image changes correspondingly in real time.

To sum up: In Virtual Projection we did interesting things with simulated projections and tried to keep them as close to their real-world model as possible. Our prototype works with regular, unmodified iPhones, the corresponding server runs on a regular Windows PC (for the video we used an i7 machine), and everything happens via Wi-Fi (so no cables needed). Imagine having a VP server running on every display that you encounter in your daily life and being able to “borrow” the display space for a while (e.g., to look something up on a map). Give it a few more years (and a friendly industry consortium ;) ) and this could become reality.

Birds-of-a-feather viz bloggers meeting at Visweek’11

October 26th, 2011 § 2 comments § permalink

Visweek, arguably the most important conference for scientific approaches to visualization, is happening right now and provides a venue for various discussions. The birds-of-a-feather (BoF) meetings, for example, allow small groups to informally convene around a certain topic. I just came from the visualization bloggers meeting there and took notes (and unfortunately sat out most of the discussion because of that), but will summarize the most important points here (ask me if you want the full transcript).

The audience had a highly mixed background: some people from infovis, but also from scientific visualization and from industry. Some of the bigger names were Enrico Bertini (Fell In Love With Data), who had also organized the meeting, and Robert Kosara (eagereyes). It was very interesting to hear their opinions on and experiences with blogging.

After the introductory round a relaxed discussion about various topics started. I will summarize the most interesting of their points.
A major topic was people’s actual motivation for getting into blogging about visualization. Dr. T.J. said that it’s important to communicate scientific ideas to the public, and if you, the scientist, won’t do it, then somebody else will – and likely get it wrong. It was also easy to reach a certain visibility via Google, as only a small number of people were blogging about these topics. Enrico said that blogging helped him get a much deeper understanding of the projects he was discussing. Jan Aerts added that initially he just needed a place to get all his thoughts in a row – not necessarily with an audience in mind. Robert’s motivation was the same as for being in academia: doing stuff and getting it out there. Of course, one also should not forget the increase in visibility that one gains as the author of a good blog, and the possibly valuable feedback from readers, even if it’s just a single person declaring their mind blown by a post. That said, high-frequency blogs with a more general audience can be disappointing when it comes to feedback.

Audience was another larger topic. The group was a bit torn on that: the active bloggers declared the general population to be their target readership, while some people were thinking more about directing their work towards other visualization researchers, or even writing just for the sake of it. Robert mentioned that a lot of “regular” people were geeking out about visualization, and he was surprised that linking to his conference paper on the blog actually resulted in a large number of downloads. Also, sometimes he was asked for data sets by seemingly non-visualization people. In any case, switching to English as the blogging language can help in reaching the global community, even though blogging in one’s native tongue allows (possibly) more nuanced expression and reaches the non-English-speaking parts of the population.

The blogging process itself was discussed as well. Enrico said that everybody would agree that blogging itself is painful, even though it enables constant experimentation with narrative (interviews, videos) and other aspects. However, one just never knows what will work (Robert’s ZIPScribble Map is one of his most popular posts), and waiting for the spike in page views can turn tragic when an experiment fails. Regarding frequency, posting once per week appears to be the best trade-off between the necessary work and keeping your readers happy (even though skipping blogging for a couple of months is no problem – it’s more about high-quality posts than frequency). Also, search engine rankings seem to improve when you always post on the same weekday. To come up with possible topics, people put interesting projects in their bookmarks folder and keep notes and drafts, even at the risk that those never go anywhere.

The debate turned a bit heated while discussing criticism. One person active in “visualization” who must not be named was an especially popular target for frowns – deservedly so. However, people were unclear where his popularity came from, or whether it was just bad luck for actual visualization. In any case, regarding criticism: even though blogs are public, they do not have to be sugarcoated – criticism is just as valid online. There was some discussion, however, on how agreeably this criticism should be presented (flame wars between blogs could also become interesting sources for learning and argumentation). Concluding, everybody agreed that “serious” researchers can uphold the blogging flame for actual visualization.

One last helpful thing was the set of recommendations on how to promote your own blog. Twitter seems to be the most direct channel, but other suggestions were posting on Hacker News, commenting on somebody else’s blog, or basically any other way one can think of to get the URL out (putting it on business cards and distributing them at conferences, putting it in your email signature and on your work website). However, one has to be in it for the long run.

Overall, the discussion was very interesting and gave plenty of good reasons to start blogging and keep at it.

HowTo: Remote control iOS Spotify from your couch

August 29th, 2011 § 0 comments § permalink

At the moment I like to do my working/coding from the couch while listening to music via my iPod Touch, which runs the Spotify app and is connected to the stereo. Unfortunately, the iPod is inconveniently out of reach, so if I want to change the song or playlist that’s playing, I’m out of luck (or rather: I have to get up and walk all the way to the stereo to change it). Of course, there are dedicated (a.k.a. pricey) solutions by Sonos with remote controls that provide this functionality, but I’m happy with my iPod. So what to do?

Spotify’s constant synchronization to the rescue: It’s trivially possible to remote control a running Spotify app using a playlist.

  1. First, I launch Spotify on both the iPod Touch and my laptop.
  2. I then create a new playlist (in this example called my_remote_playlist), add one or more songs and start the playback on the iPod.
  3. The contents of the playlist are constantly synchronized, so if I now add songs using Spotify running on the laptop, all changes are reflected in the iPod version of the playlist. Once the iPod is done playing one song it takes the next one from the latest version of the playlist (instead of the original one).

This provides me with a play queue where I can drop new songs at will and have them played immediately. Of course, this is just a workaround: there’s no skipping of songs or stopping playback, and I always have to keep the playlist filled with songs – otherwise it runs out and I have to get up to press play again.
But the nice thing is that it works across all devices running Spotify (I also tried the Android version and a desktop PC).
And if you want to know what song you just heard, check out your listening history on last.fm – don’t you just love these hacks?

The battle for Spotify dominance

May 30th, 2011 § 1 comment § permalink

I have a Spotify premium account, which allows accessing their whole music catalogue from a smartphone but restricts this use to only one device at a time (so either desktop or mobile). If I try to listen to music on my iPod touch and someone presses play in the desktop client, I get this not-very-subtle warning:

Similarly, the desktop client stops the music and shows this when the mobile client tries to play a song:

The interesting thing is that when a client tries to play a song while the account is already in use, it succeeds – and blocks the other client in the process.

While actually more of a nuisance, this interplay of the Spotify clients has an interesting side effect for my girlfriend and me: When she’s at home listening to music and I switch on Spotify’s mobile version, we often end up in clicking battles for Spotify dominance: Whoever keeps pressing play the longest gets to listen to music. We have various strategies (e.g., letting the other one listen for a minute or so, giving him/her a false sense of victory, then cutting off the music). Sometimes we also act like good little prisoners in this dilemma and cooperate. I’m not sure what the intention behind this unusual behavior of the clients was, but it allows for a very strange form of non-verbal communication and social snacking.

Creating a need for personal data

May 18th, 2011 § 0 comments § permalink

The recent scandal about Apple’s iOS location data collection has caused – not surprisingly – a huge uproar. Yet, some people (myself included) even found themselves jealous of iOS users who had this amazingly detailed data set of their own movements and could play around with it and visualize it. So, where does this completely different outlook on personal data come from, and what implications does it have for society?

First, I have to admit to having a somewhat bleak outlook on the future of personal data: Privacy and self-restraint in collecting only as much data as necessary are wonderful ideas, but they are simply no longer enforceable. We’re moving towards a future where extensive data collection using all kinds of sensors and devices is commonplace. One reason is that it’s a regulatory nightmare to keep companies from storing this information and to enforce its deletion. This becomes even harder when every single device collects data, not just large websites and services. Also, even if you argue that this data shouldn’t be collected in the first place, it is often necessary for providing interesting features and services. Finally, a lot of people simply don’t care – if you don’t believe me, just look at the stuff they post on Facebook.

The point where this development could have been stopped is long gone, so we just have to live with permanent data collection that will rather get worse than better. But there are still two brave new worlds this could lead to: Number one is the Orwellian one (with a dash of Kafka), where faceless governments and corporations hold your data and know more about you than you do yourself. Your access to it is heavily restricted, if you have any at all. You might not even be aware of any data collection going on.
World #2 still has these huge amounts of personal data, but they’re the people’s. They belong to the producers, not the collectors, and each person has free access to and control over their own data. They created them, after all, so they’re their intellectual property. Companies might want to use them but have to license them first.

I think that the development towards permanent data collection cannot be stopped, but we should at least strive to reach a future that’s more like world #2 than #1. I think the reactions to the iOS scandal give us an idea of how to go about that. So, why were most people shocked and some giddy about this whole thing?

People care deeply about their data, which is a good sign and not a given in a world where seemingly every bowel movement is documented on Facebook. But this care is very abstract: People want this involuntary data collection to stop and want to see the data deleted once they see something about it on TV. Most people don’t care about having access to it; they just want to keep the collection from happening (which might be hard, as we’ve seen). Interestingly enough, they often don’t mind the collecting when they get something out of it: Some companies add another service layer on top of this data to recommend items (Amazon) or build playlists (Apple Genius), which suddenly makes permanent tracking OK. Also, access to the ‘raw’ data then no longer seems necessary.
Yet, this demand for raw data is the only thing keeping us from a future where all personal data is in the hands of the state or private companies. It is therefore necessary that a lot of people demand access to it, not just a couple of geeks (sorry), as these stakeholders will only listen to large-scale demand (and maybe even make a business out of it). The main problem is that regular people are usually not tech-savvy enough to do something with raw data, which is why they don’t care for it – and that is completely understandable: Imagine having access to the most expensive sports car but no idea how to drive it. The techie crowd, however, became quite excited about the iOS data and even started collecting these location histories when the latest Apple patch deleted them.

Therefore (and here’s my call to arms): We as researchers and practitioners have to create this demand and need for personal data in people. We have to give them the tools to actually do something interesting with their data, without having to learn to code. Visualizations for personal data and the various lifelogging services are first steps in this direction. And even Facebook by now lets you download your complete profile.
But I want more: I want to download my reading history from my Kindle, with timestamps of when I turned each virtual page; I want to see how much time I spent on every single webpage I visited (and where these weird recommendations come from, Amazon); I want to know why my credit card company thinks I wouldn’t buy this or that; and, yes, I want my smartphone to collect information on my whereabouts – and I want it to give that information to me!

All of that will only happen if there’s enough demand for raw data. So keep on building these amazing visualizations and tools and give them to the people. Make them as convenient to use as a TV and as beautiful as a coffee-table book. Let people learn about themselves and use this data for fascinating new forms of communication. And make sure that the only future silo that keeps your most personal information belongs to you and no one else.

tl;dr: create demand for raw personal data by giving people neat tools for it.

Music Hackday: Flowlist

February 15th, 2011 § 0 comments § permalink

I had a great time this weekend at New York’s very own Music Hackday. An impressive number of people spent 24 hours churning out all kinds of fascinating music applications. That was possible largely thanks to all the great APIs available for this type of data – and of course also thanks to the great environment at General Assembly, with food and drinks provided by the sponsors.

My hack is called flowlist, and it is an interactive web-based playlist generator. The basic idea is taken from my paper “Rush: Repeated Recommendations on Mobile Devices” (last year at IUI, PDF, Video). In a nutshell: The listener selects an initial seed song, gets a set of five similar items, selects one of these, gets a set of five similar items again, and so on.
Rush 1

Rush: Repeated Recommendations on Mobile Devices

By doing that, you can quickly create a somewhat personalized playlist, which gives you an advantage over fully automatic playlist generators (which allow only so much customization) and, of course, over fully manual playlist building (just think of your 60 GB of music…). Rush also had a fairly unique way of navigating the results: Once you pressed on the touchscreen, the virtual canvas containing the songs started flowing in the opposite direction, and songs were selected by simply touching them (the video makes this clearer).

So, flowlist (try it in your browser right here or watch the video) works similarly: You first enter a search term and are then presented with the top three results from echonest.
flowlist intro

flowlist gives you echonest’s top results for your query

Based on this “seed song”, flowlist uses echonest’s excellent API to determine similar songs on the fly. These songs are then shown towards the right, and by pressing the mouse button the virtual canvas again starts flowing, letting you select songs simply by crossing them with your cursor.
flowlist - flow

Based on your last selection, flowlist shows similar songs
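
For those curious how such lookups can be done, here is a simplified sketch of the two requests using jQuery. This is not the actual flowlist code: the endpoint and parameter names are written down from memory and may not exactly match the echonest documentation, YOUR_API_KEY is a placeholder, and a JSONP callback parameter may additionally be needed for cross-domain requests from the browser.

// 1. Search for seed songs matching the user's query (top results).
function searchSongs(query, callback) {
    $.getJSON('http://developer.echonest.com/api/v4/song/search', {
        api_key: 'YOUR_API_KEY',   // placeholder – use your own key
        format: 'json',
        results: 3,
        combined: query            // matched against both artist and title
    }, function (data) {
        callback(data.response.songs);
    });
}

// 2. Fetch songs similar to the selected seed (a "song radio"-style playlist).
function similarSongs(songId, callback) {
    $.getJSON('http://developer.echonest.com/api/v4/playlist/static', {
        api_key: 'YOUR_API_KEY',
        format: 'json',
        results: 5,                // five similar items, as in Rush
        type: 'song-radio',
        song_id: songId
    }, function (data) {
        callback(data.response.songs);
    });
}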

Once you think that the playlist is big enough, you can hover over it to bring up the export options: Right now there’s plain text, but flowlist also lets you export the playlist to Spotify via Playlistify. That was also the part that took me the most time – an easy API for playlist exporting is still missing from the whole music API circus. And, of course, having a streaming service that gives you music based on URIs and lets you play it right in the browser would have been even better – but that’s a different story.

Flowlist is written mostly in Processing and ported to the browser with processing.js. I also used some jQuery for accessing the various APIs, but flowlist runs right in the browser.

All in all: Wonderful time at Music Hackday, lots of fun and great people. Can’t wait for the next one!

UPDATE: Flowlist now has support for Grooveshark widgets. So, create a playlist and click on the little Grooveshark icon in the lower right of the screen to create a widget. Be warned, though, the support is still a bit buggy – if something goes wrong, just try again.

Map-based music visualizations considered harmful

February 5th, 2011 § 0 comments § permalink

In order to make sense of huge collections of music, the first idea that comes to people’s minds is often: Display all of these songs using some minimalistic graphical abstraction (so they fit on the screen). This approach is not only common in research but is by now also found in commercial products and apps, where maps are commonly used as a way to one-up the boring old lists.

Last.fm Artist Map tuneglue sonarflow

Oooh, so pretty!

Just to be clear: I’m not against music maps per se – they’re great for discovery. The recent Aweditorium for iPad, for example, lets you find new indie music in a playful way and is really, really neat. The problems start, however, once you decide to use a map for anything beyond discovery: Organizing music collections, building playlists and so on.

My first problem with this approach is how music is represented in one’s mind: Maps are often used for visualizing personal music collections that, as the name implies, also contain all kinds of invisible personal stories stuck to the songs. While James Yuill and The Airborne Toxic Event might not be very similar music-wise, they have a certain connection in my mind as I kept thrashing their albums in the same summer. However, the layout of such a map is usually based on similarity, either content- or metadata-wise, and these algorithms can’t reflect that. Using a personal listening history as basis for the map could, for example, work much better (full disclosure: LastHistory was developed at our research group. Here’s the research paper (PDF)).

The second problem is the visual representation of songs. Finding a specific artist or even title on a map is often close to impossible, as there are mostly no cues for navigation (musical genre and sometimes even similarity are vague and often based on personal taste). So the only option for finding something is visually scanning the whole map. And depending on the representation of songs, that can take a while. This scanning becomes more and more difficult the more abstract the representation and the more interaction is required: While it’s possible to find a song when all titles are shown with their album cover art or a photo of the artist, locating a song in a field of colored circles requires much more effort, and finding it on a musical “terrain” is practically impossible. Aside from the visual representation, interactivity can also hinder this effort: requiring people to tap on circles to learn about their true nature (a.k.a. title and artist) will ensure that you only keep the most obsessive-compulsive part of your audience.

Mufin Player Treemap Personal Collection Treemap Personal Collection
Quick! Find Madonna!

Such a search is made even more frustrating when the app has no capabilities for searching or filtering. While text-based direct search might not be the perfect way to look for music (query by humming comes to mind), it’s better than visually scanning the whole map. Even better is a way to filter this information by user-defined criteria (genre, lyrics, tags, release date and the rest of all that metadata) to let people find what they are looking for.

The most important rule (which is true for all map-based representations): If you absolutely have to use this abstraction, MAKE SURE THAT THE MAP DOES NOT CHANGE! Humans are very good at remembering spatial relationships (you could probably tell me, without looking, how all the things in your apartment are arranged; see also the Roman Room memorization technique), and user interfaces that tap into this ability should also support it. Once the map changes, all of these intricately learned relationships are useless. Case in point: Aweditorium. I remembered hearing interesting Danish experimental pop there (it was Oh No Ono, by the way), which was somewhere in the upper right corner of the map. But of course, once I restarted the app, the whole map got reshuffled and I couldn’t find it. Why have a map if the spatial relationships are meaningless anyway? I have similar gripes with the actually pretty great Kindle app for the iPad, which re-calculates the layout of the text – no, not only every time the font size is changed – but every time the app is restarted! Again, remembering that some interesting tidbit of text is on the lower part of a page is of no use.

To sum up, here’s the list of map rules in short:

  • Use easily memorizable/recognizable representations for songs. Cover art or artist photos work well. Try to think of visual landmarks to support visual search.
  • Provide a text-based search for specific titles/artists and filters for genres.
  • Never ever change the layout of the map! If you have to change it to add or remove songs, try to keep it as close to the original as possible and inform the user about it using animations.

HowTo: Processing.js in iOS

February 5th, 2011 § 3 comments § permalink

Processing.js is a great JavaScript port of the Processing language that reached the version 1.0 milestone a few months ago and allows the easy creation of visually appealing interfaces. Plus, being based on JavaScript and HTML5, it promises the mythical ‘Write once, run anywhere’, even on mobile devices that don’t support Java.

For a recent multi-platform visualization project I wanted to use processing.js so I wouldn’t have to write everything twice. Unfortunately, however, getting processing.js to run on iDevices doesn’t seem to be that easy. Projects like iProcessing run Processing code on the iPhone, but only there (the regular desktop web browser version needs additional work). There are also dedicated rewrites of processing.js, which bring with them the disadvantage of every fork: for every update of processing.js, the wrapper also needs one.

Fortunately, it is pretty easy to use any version of processing.js on iOS simply by using a suitable HTML skeleton. Necessary steps are:

  1. Transform touch events to mouse events
  2. Keep MobileSafari from scrolling/zooming

First, we have to let the processing-canvas catch all touch events, so we can call the corresponding processing-functions. We do that simply by adding the corresponding event handlers to it:

<canvas
ontouchstart="touchStart(event);"
ontouchmove="touchMove(event);"
ontouchend="touchEnd(event);"
ontouchcancel="touchCancel(event);"
id="sketch" data-processing-sources="sketch/sketch.pde" width="320" height="480" autofocus></canvas>

Then, we have to write some JavaScript that converts the touch coordinates to Processing’s internal mouse coordinates. We also want to call the corresponding mouse function here (mousePressed for touchStart and so on). Finally, we have to keep MobileSafari from using the touch events for scrolling, which is done by calling event.preventDefault() for each event:

<script type="text/javascript">

var processingInstance;

// Copy the position of the (first) touch into the sketch's mouseX/mouseY.
function setProcessingMouse(event) {
    if (!processingInstance) {
        processingInstance = Processing.getInstanceById('sketch');
    }

    // On touchend/touchcancel the lifted finger is no longer part of
    // event.touches, so fall back to changedTouches for its last position.
    var touch = event.touches[0] || event.changedTouches[0];

    processingInstance.mouseX = touch.pageX;
    processingInstance.mouseY = touch.pageY;
}

function touchStart(event) {
    event.preventDefault(); // keep MobileSafari from scrolling/zooming
    setProcessingMouse(event);
    processingInstance.mousePressed();
}

function touchMove(event) {
    event.preventDefault();
    setProcessingMouse(event);
    processingInstance.mouseDragged();
}

function touchEnd(event) {
    event.preventDefault();
    setProcessingMouse(event);
    processingInstance.mouseReleased();
}

function touchCancel(event) {
    event.preventDefault();
    setProcessingMouse(event);
    processingInstance.mouseReleased();
}

</script>

Here is the complete HTML-skeleton. Make sure that you have processing-1.0.0.min.js in the same directory (you can download it from processingjs.org) and a processing-sketch in ‘sketch/sketch.pde’.
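
If you’d rather build the skeleton up yourself, a minimal version looks roughly like this (the viewport meta tag is an optional extra that additionally keeps users from pinch-zooming the page):

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width, user-scalable=no" />
    <script type="text/javascript" src="processing-1.0.0.min.js"></script>
    <script type="text/javascript">
        // ... the touch handlers from above go here ...
    </script>
</head>
<body>
    <canvas
        ontouchstart="touchStart(event);"
        ontouchmove="touchMove(event);"
        ontouchend="touchEnd(event);"
        ontouchcancel="touchCancel(event);"
        id="sketch" data-processing-sources="sketch/sketch.pde" width="320" height="480" autofocus></canvas>
</body>
</html>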

Some caveats: Multi-touch is not supported (all touches beyond the first are ignored), and advanced Processing features such as video or 3D probably don’t work. But you can use this bare-bones HTML to get your Processing sketches on the web – on all devices.

SubversiveBox: Your own Dropbox for Subversion

January 19th, 2011 § 0 comments § permalink

Dropbox is a great service for using a file repository without the usual hassle of committing and updating (or knowing far too much about computers), and they even give you 2GB for free. Understandably, though, more space costs money (the smallest upgrade is currently $10 for 50GB).

I’m not in the mood for paying when my university gives me a (practically) unlimited SVN-repository for free. As manually committing changes is inconvenient, I decided to write a script that would do that for me.
Some similar approaches that I found while looking for something like it: RubyDrop is a Dropbox clone based on Ruby and git (so no SVN), and here’s a similar project for Linux (just syncing, no repository).

The result is SubversiveBox, a small console application that keeps track of your working copy and immediately commits changes. You can download a compiled version here. It needs the SVN command-line tools to work.

It’s very basic, but also very simple to use: Just launch it with

SubversiveBox path username [password]

and it will do the rest. Path is the path to your working copy; username and password are the credentials for your repository. Be careful, though: it doesn’t support folder renaming and is still at a very early stage, so it might easily get confused and scared and stop working (doing backups of your important files would probably be a good idea – as it always is). Also: it’s open-source, so you can easily fix bugs yourself (yay)!
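
To give you an idea of what a tool like this has to do internally, here is a heavily simplified sketch of the core loop – in Node.js for brevity, so it is not the actual SubversiveBox code; credential handling is omitted, and the commit message and polling interval are just examples:

// Heavily simplified sketch of the watch-and-commit idea:
// poll `svn status`, add new files, then commit everything.
var execSync = require('child_process').execSync;

var workingCopy = process.argv[2];   // path to your checked-out working copy

function syncOnce() {
    // `svn status` lists local changes; lines starting with '?' are unversioned files
    var status = execSync('svn status', { cwd: workingCopy }).toString();

    status.split('\n').forEach(function (line) {
        if (line.charAt(0) === '?') {
            // schedule new files/folders for addition
            execSync('svn add "' + line.slice(1).trim() + '"', { cwd: workingCopy });
        }
    });

    if (status.trim().length > 0) {
        // commit all pending changes in one go
        execSync('svn commit -m "auto-commit"', { cwd: workingCopy });
    }
}

// poll every ten seconds; reacting to file-system events would be the nicer solution
setInterval(syncOnce, 10000);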

Have fun!
