Wednesday, March 25, 2015

Interactive Dimensioning of Parametric Models

Here's my paper that'll be presented at Eurographics this year. It's a screen-space technique for positioning control handles on parametric models. [pdf | doi]

We propose a solution for the dimensioning of parametric and procedural models. Dimensioning has long been a staple of technical drawings, and we present the first solution for interactive dimensioning: a dimension line positioning system that adapts to the view direction, given behavioral properties. After proposing a set of design principles for interactive dimensioning, we describe our solution consisting of the following major components. First, we describe how an author can specify the desired interactive behavior of a dimension line. Second, we propose a novel algorithm to place dimension lines at interactive speeds. Third, we introduce multiple extensions, including chained dimension lines, controls for different parameter types (e.g. discrete choices, angles), and the use of dimension lines for interactive editing. Our results show the use of dimension lines in an interactive parametric modeling environment for architectural, botanical, and mechanical models.

Thursday, November 13, 2014

Eurographics is a European graphics conference! The review emails never make it clear what the scoring scheme is (it seems to be published only to reviewers). The ranges (for 2015) therefore follow...

Overall Recommendation Score:

0 - 1 - very poor
1 - 2 - poor
2 - 3 - clearly below EG standard
3 - 4 - dubious - not quite acceptable
4 - 5 - marginal - only just acceptable
5 - 6 - acceptable
6 - 7 - good
7 - 8 - very good
8 - 9 - excellent

"An average of at least six is required for acceptance, but might not quite be enough."

Confidence Score:

0 - 1 - Very unconfident, really just a guess
1 - 2 - Rather unconfident, but I know a bit
2 - 3 - Moderately confident, I know as much as most
3 - 4 - Pretty confident, I know this area well
4 - 5 - Extremely confident, I consider myself an expert

Wednesday, February 19, 2014

Unwritten Procedural Modeling with the Straight Skeleton

...and a year later we have the ever-so-slightly improved final thesis: (PDF) (LaTeX source to follow)

Sunday, February 16, 2014

art from the code trenches

One of the upsides of programming graphics is that our bugs are art...

Sunday, May 05, 2013

Unwritten Procedural Modeling with Skeletons

Update: finished thesis post available here or here.

The first complete draft of my thesis is now online (87Mb). It hasn't been submitted or viva'd yet, but may still be interesting to people. It's mostly a collection of posts from this blog, plus the two papers I published about the straight skeleton.

I'm sure there are plenty of mistakes in it - let me know! (Update: don't let me know, I've finally submitted!)

Monday, April 22, 2013

Vines are 5.5 seconds too long

There's a problem that's been bothering me for a while: how do we let users create consumable home video? If you've ever tried to use a video-editing package, you'll probably have:
  • given up because the editor was hard to use (have you seen the edits people choose with Vine?...), or
  • given up because the editor couldn't import your footage, or
  • given up because the editor kept crashing, or
  • spent 10x the length of your final video, getting the clips lined up just right, or
  • had the result be entirely unwatchable to people who don't know your friend, Fred.
I went to Thailand and, being a geek, took 3 cameras and came back with way too much mediocre footage: ~30Gb, or 3 hours or so. Editing it all down to anything my friends would actually want to watch (or, fantastically, recommend someone else watch) would have taken a long time, and was probably beyond my skill level and hardware. My solution was to pick a 0.5-second-long clip from each video. I was really quite pleased with the result:

This is actually quite an interesting 4-minute video, as far as holiday videos go. Possibly about 3 minutes too long, but pretty succinct.

The thing that really struck me was that the process of selecting the 0.5-second section from each (sometimes quite long) clip was almost trivial, to the point where an algorithm could make pretty good guesses:
  • the first 0.5 seconds of a clip is a good default
  • ignore long runs of frames that are all the same (when I leave the lens cap on)
  • when not much moves, and then something moves, that's what's interesting
  • however, if the view wobbles for 1-2 seconds at the start before going steady, that's me positioning the camera, and you want the bit after
  • changes in volume can be interesting
  • blurry things generally aren't interesting
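Just as a sketch: the heuristics above could be approximated by a toy scorer that skips a static lead-in and an initial wobble before taking the first half-second window. Frames here are flat lists of pixel intensities, and every threshold and name is invented for illustration, not a real implementation:

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two same-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def pick_bee(frames, fps=10, seconds=0.5,
             static_thresh=1.0, wobble_thresh=20.0):
    """Pick a (start, end) frame window for the bee.

    Heuristics from the post:
    - default to the first usable window,
    - skip leading frames that barely change (lens cap on),
    - skip an initial wobble while the camera is being positioned.
    """
    window = max(1, int(fps * seconds))
    diffs = [frame_diff(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    start = 0
    while start < len(diffs) and diffs[start] < static_thresh:
        start += 1  # nothing happening yet
    while start < len(diffs) and diffs[start] > wobble_thresh:
        start += 1  # camera still wobbling
    start = min(start, max(0, len(frames) - window))
    return start, start + window

# 5 lens-cap frames, 2 wobbly frames, then gentle motion:
static = [[0, 0, 0, 0]] * 5
wobble = [[100, 100, 100, 100], [0, 0, 0, 0]]
action = [[i, i, i, i] for i in range(0, 50, 5)]
print(pick_bee(static + wobble + action))  # → (6, 11)
```

A real version would work on decoded video and add the volume and blur cues, but even this crude sketch captures the lens-cap and camera-wobble rules.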
So then I started considering whether this would be a viable company. Let's call something like the above video a strobee (despite the fact the url was taken long ago), and each individual clip a bee.

So now we imagine a world in which everyone is uploading bees from their phones, shiny new pairs of Google Glasses (or do you wear a Google Glass?), cameras, etc... We pretty quickly come to the conclusion that strobees can, and should, be assembled on the fly from a large database. That is, we could stream a strobee (an endless video stream) to a user based on:
  • users (user channels)
  • your current location (a stream that changes as you drive down the street!)
  • a particular location
  • most recent in time
  • a certain hashtag (#bobs_wedding, #election2015)
  • popularity (how do we judge popularity - votes? interaction with a strobee?)
  • colours (show me videos that are mostly red)
...there are enough possible use cases to warrant an expansive API.
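For what it's worth, the on-the-fly assembly could look something like this toy sketch: a filter over a bee database by hashtag and location, ordered by recency. The Bee record and every field name here are invented, not an actual API:

```python
from dataclasses import dataclass

@dataclass
class Bee:
    user: str
    hashtags: set
    lat: float
    lon: float
    timestamp: int  # upload time, seconds since epoch
    votes: int = 0

def assemble_strobee(bees, hashtag=None, near=None, radius=1.0, limit=10):
    """Most recent bees matching an optional hashtag and/or location."""
    def matches(b):
        if hashtag and hashtag not in b.hashtags:
            return False
        if near is not None:
            lat, lon = near
            if (b.lat - lat) ** 2 + (b.lon - lon) ** 2 > radius ** 2:
                return False
        return True
    hits = sorted((b for b in bees if matches(b)),
                  key=lambda b: b.timestamp, reverse=True)
    return hits[:limit]

bees = [
    Bee("alice", {"#bobs_wedding"}, 0.0, 0.0, 100),
    Bee("bob", {"#election2015"}, 0.0, 0.0, 200),
    Bee("carol", {"#bobs_wedding"}, 5.0, 5.0, 300),
]
print([b.user for b in assemble_strobee(bees, hashtag="#bobs_wedding")])
# → ['carol', 'alice']
```

Popularity, colour, and time-window filters would slot in the same way; the hard part is doing this over a giant database in real time, not the filter itself.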

There must be a wide range of algorithms we could apply to public video feeds (nicely complying with the "substantiality of the portion" factor in American copyright law) to extract interesting bees. Strobee libraries might come from:
  • webcams
  • movies (imagine a 3rd party service that delivered a bee containing your chosen word from a random movie)
  • old fashioned TV streams
  • satellite images
Revenue seems to be a much simpler sell than for Twitter or Facebook. We can limit ourselves to showing only 0.5 seconds of an advert every so often, targeted to the user, the search, or the location. Given that people will put up with anything for 0.5 seconds, ad-bees shouldn't be too much of a disincentive. There's always the option of paying to remove adverts. Since the adverts are part of the stream, we could let people embed strobees into other websites, or request a stream via an API, without issue.

  • This blog post basically describes the Vine ecosystem, but with a lower maximum clip length. It would be trivial for Vine to compete. Then again, Twitter competes successfully with email.
  • There's some deep technical work to do on compressing such short clips; the setup of the I-frames in certain short clips is problematic.
  • Can we compose a stream from such a giant database in real time?
  • How do people give feedback on blink-and-you-miss-it bees?
  • We would want to disseminate everyone's clips and show adverts alongside them. Perhaps we don't show adverts to people who create popular bees? Should we ask for (or default to) a creative commons license for all bees?
  • If it were ever popular, people would use it for p0rn. How do we filter such short content? The ever-racist 70%-pink-pixels-per-frame criterion?
  • How does someone wearing Google Glass upload a strobee? We could take the 0.5 seconds before someone says "strobee"? (Until it became popular enough that people would shout "strobee" at you whenever you wore your Google Glass into town.)
  • A host of privacy/missing context lawsuits are likely...

Sunday, April 01, 2012

eurographics fast forward video

For a bit of light relief, here's the fast forward video for the Procedural Parcels paper that Carlos is presenting at Eurographics this month.