Moing is for Animation
The design and implementation of an Open Source animation tool.
June 12, 2009
GitHub Migration
A small piece of news: I took a couple of minutes today to migrate our existing darcs repository to git, and put it up on GitHub.
May 16, 2009
Moing in 46 Words
Moing is software for creating motion graphics. The basic principle is that you place graphics (bitmap, vector, video, or scenes) in a scene and stick "pins" in them. Then you can simply animate the position of the pins in order to move or distort the graphics.
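To make that concrete, here's a minimal Haskell sketch of the model; every name in it is hypothetical rather than drawn from the actual codebase:

-- Hypothetical sketch of the pin model; none of these names come
-- from the real Moing code.
type Time  = Double
type Point = (Double, Double)

data Graphic = Bitmap FilePath
             | Vector FilePath
             | Video FilePath
             | SubScene Scene

-- A pin is stuck into a graphic at an anchor point; animating its
-- position over time moves or distorts the graphic around it.
data Pin = Pin { pinAnchor   :: Point
               , pinPosition :: Time -> Point
               }

data Item = Item { itemGraphic :: Graphic
                 , itemPins    :: [Pin]
                 }

newtype Scene = Scene [Item]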
Not Dead, But Resting
I haven't updated the site for more than a year because I've been too busy with other projects. I do intend to return to Moing at some point in the future, however. In the interim I'll probably continue posting design ideas.
At this point, based on my experience with the immature state of FRP (Functional Reactive Programming), I'm not as sold on the idea of using Haskell for this project as I once was. Darcs has also basically failed as an SCM and the Haskell world is now migrating to git, so even if I continue with Haskell we will be switching SCMs.
One language I've been considering lately as an alternative to Haskell is ATS, a dependently-typed functional programming language which achieves performance comparable to C++. I love the language's semantics, but the biggest thing holding me back at present is that I'm not a fan of the syntax. So, we shall see. Whilst Moing development is in hibernation I'll be exploring various language alternatives.
Labels: ATS, darcs, development, FRP, git, haskell, implementation
October 31, 2007
Tabs on the Asset Pane
I talked with John at length about the asset pane tonight, and we arrived at the conclusion that (since the intent is for most of the asset management to be done via tags) there's not a lot of value in having a tab for each asset directory. So we'll probably have just two tabs: "All" and "In Use". "All" would be sorted by modification time, and "In Use" would be sorted from most to least recently used.
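As a quick illustration, here's how those two sort orders might look in Haskell; the Asset record is invented purely for the sake of the sketch:

import Data.List (sortBy)
import Data.Maybe (isJust)
import Data.Ord (Down (..), comparing)
import Data.Time (UTCTime)

-- Invented asset record, purely for illustration.
data Asset = Asset { assetName :: String
                   , modTime   :: UTCTime        -- last modification
                   , lastUsed  :: Maybe UTCTime  -- Nothing if never used
                   }

-- "All" tab: every asset, most recently modified first.
allTab :: [Asset] -> [Asset]
allTab = sortBy (comparing (Down . modTime))

-- "In Use" tab: only assets in use, most recently used first.
inUseTab :: [Asset] -> [Asset]
inUseTab = sortBy (comparing (Down . lastUsed)) . filter (isJust . lastUsed)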
August 27, 2007
Designing Interfaces
Designing Interfaces: Patterns for Effective Interaction Design is an intermediate-level book about interface and interaction design, structured as a pattern language. It features real-life examples from desktop applications, web sites, web applications, mobile devices, and everything in between. The web site has a number of the patterns from the book online.
The First Toys
The first few toys we'll be implementing are for the sake of figuring out how the input infrastructure should work:
- raw-events - dumps the window's input events to stdout, demonstrating event categorization [DONE]
- state-machine - implements a simple button widget, demonstrating state machines for input handling [DONE]
- wrangle-items - create, delete, and drag items on the canvas, demonstrating nested state machines for processing higher-level UI operations
- ksketch-dongle - an implementation of KSketch's object manipulation dongle, demonstrating the basic canvas infrastructure
- segment-editor-prototype - a prototype of the segment editor, demonstrating a scrolled canvas
Each toy is built by running:

runhaskell Setup.hs configure && runhaskell Setup.hs build

and then run with:

dist/build/moing-toys/moing-toys toy-name
August 16, 2007
iMovie '08
iMovie '08 is out, and Macworld has a first look up.
Mike's already proposed that we do segments in the timeline as film strips, and now that I've seen it in action in iMovie, I'm sold on the idea. Skimming also seems like a much more valuable way of auditioning clips than the simple realtime previews we had been considering for Moing thus far.
Other than that, though, I'm not very impressed.
August 5, 2007
State Machines for Interaction
Haskell invariably offers a challenge when writing interactive code, since (unlike for most pure functional code) there aren't many nice abstractions available in the hierarchical libraries for doing interactive things. One of the abstractions I'd like to investigate for handling input in Moing is the state machine: a state machine consumes Gdk events and produces higher-level events, which it feeds to another layer of state machines, and so on.
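Here's a rough sketch of the shape I have in mind; the event types below are invented stand-ins for the real Gdk events:

-- Rough sketch of layered state machines for input handling; the
-- event types are invented stand-ins for real Gdk events.
data LowEvent  = Press | Release | Motion (Double, Double)
data HighEvent = Clicked | DragTo (Double, Double)

-- A machine consumes one input event, possibly emits a higher-level
-- event, and returns its successor state.
newtype Machine i o = Machine { step :: i -> (Maybe o, Machine i o) }

-- A simple button: a press followed by a release emits Clicked.
button :: Machine LowEvent HighEvent
button = up
  where
    up = Machine $ \e -> case e of
           Press -> (Nothing, down)
           _     -> (Nothing, up)
    down = Machine $ \e -> case e of
             Release -> (Just Clicked, up)
             _       -> (Nothing, down)

-- Feed a stream of events through a machine, collecting the
-- higher-level events it emits (to feed to the next layer).
runMachine :: Machine i o -> [i] -> [o]
runMachine _ []     = []
runMachine m (e:es) =
  let (out, m') = step m e
  in maybe id (:) out (runMachine m' es)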
Toys for Development
After more than forty posts, we're nearing the point where we need to start investigating and proving ideas through prototyping. Some of this will be in the form of paper prototypes, and some of it will be in the form of toys.
Toys are an idea I'm taking from lib2geom, where Nathan instituted them as the preferred development approach and they have contributed a lot to lib2geom's success: essentially, a toy is a very simple program which demonstrates a specific concept and allows interactive experimentation. They're also a good supplement for unit tests, when dealing with large-scale properties of code that are difficult to write unit tests for, or cases where the desired behavior isn't even clear yet.
One of the temptations with toys is to treat them as a first draft of the main code itself: since lib2geom is a library, it's been less of a temptation there, but I imagine it's going to require more effort to resist with Moing. So I'd like to lay out several ground rules for toys:
- Toys must be focused: they should cover a narrow vertical slice of functionality relative to the intended functionality of the main code
- Toys get their own section of the source tree
- No code from a toy can go into the main portion of the source tree without being rewritten
- However, toys are allowed to use code from the main program, and toys can be refactored within the toys section of the codebase to create a "toy framework"
darcs versus git
At the moment, I'm using darcs for Moing. While I don't hate darcs (and I rather admire its model of patch commutation), I still prefer git for most things, particularly when working with branches. I'm going to try to stick with darcs for the sake of compatibility with the Haskell community, but if things get too uncomfortable we will probably end up switching to git.
Realtime Considerations
One of the things that most (non-professional) media applications don't seem to take into account is that playback/editing is really a realtime problem, and needs to use soft realtime approaches if it's not going to suck. While I'm not going to worry about it in the initial implementation, soft realtime is definitely a direction I want to go with Moing for the sake of getting reasonably smooth playback.
Since lazy evaluation, garbage collection, and even unconstrained memory allocation are all inimical to realtime programming, that means we can't get there in Haskell. But I don't want to try to write the entire application with realtime constraints in mind. Rather, I'd want to section off a small realtime portion of the code in a language other than Haskell (probably a carefully-chosen subset of C++) and run it in separate thread(s). The two halves would be connected by (mostly) nonblocking communication channels: the realtime half would blindly work its way through a local "script" for the scene containing all the timing and positioning information, while the non-realtime half would control it by sending incremental updates to that script across a channel. This is roughly the approach Rosegarden uses with its separate realtime sequencer process, and I think it's a much better idea than trying to write the entire application under realtime constraints (which gets harder and harder the more code is involved).
That reminds me of a particular disagreement I had with a certain realtime programmer (I hope he is not representative of his field) about media programming. Aside from my seeing no need to write the entire application under realtime constraints, there was also the issue of what should happen if deadlines are missed: yes, if you're missing deadlines it's a bug in principle, but this isn't a critical situation where you have to self-destruct the rocket if the control system goes non-linear on you. In fact, in practice, when we're talking about media apps, the amount of work the user may give the realtime portion to do is unbounded, so if the user makes a complex enough document you're going to miss some deadlines no matter what you do. So, for media at least, I think you want to be able to handle missed deadlines gracefully, resynching as time allows, rather than crashing, entering an unusable state with buffers full of garbage, or letting work pile up indefinitely. Those latter three are not acceptable options.
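To make the channel discipline concrete, here's a sketch of the non-realtime side using a bounded STM queue. ScriptUpdate is just a placeholder, and in reality the channel would have to cross over to the realtime C++ side rather than staying in Haskell:

import Control.Concurrent.STM

-- Placeholder for whatever an incremental script edit ends up being.
data ScriptUpdate = ScriptUpdate String

-- Push an update toward the realtime side without ever blocking;
-- returns False when the queue is full, so the caller can coalesce
-- updates and retry later rather than stall.
trySend :: TBQueue ScriptUpdate -> ScriptUpdate -> IO Bool
trySend q u = atomically $ do
  full <- isFullTBQueue q
  if full
    then return False
    else do writeTBQueue q u
            return True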
HUD Cues
To the extent that Moing has different modes in its UI, I'd like to make them very obvious. If we fail to do that, users are going to lose track of what mode they're in, or not even realize that they've entered a different mode when they click something in the interface. To that end, I think most mode changes should be reflected prominently in the editor pane as "HUD" information.
Two of the most significant modes are the recording and pose modes. Recording mode is indicated with a red border around the canvas (an idea taken from KSketch), and pose mode is indicated with a yellow (or possibly blue, for the sake of colorblind users) border, as well as the name of the pose in largish letters of the same color.
More minor information (usage hints, etc.) that doesn't quite merit the HUD appears in the status bar instead, more or less the way we do it in Inkscape.
The Transport Bar
At the bottom of the editor panel is the transport bar, which has a few basic buttons: skip back, step back, play/stop, step forward, skip forward, and record. The skip buttons skip forward or back to the nearest target frame (other than the current one), step moves the playback cursor frame-by-frame, and play and stop do the obvious. Record is like play, but puts us in recording mode, where the animation plays back and any edits performed during playback are recorded as keyframes and target frames.
It is possible that we might not need a distinct "record" mode, but simply always record changes if any are made during playback. I'll have to think about that one some more.
August 4, 2007
The Playback Cursor
Like most timeline-based apps, Moing should have a playback cursor on the timeline. The Moing timeline would have a "scale" divided into two parts: one with the draggable triangular playback head, and the part below it with the scale markings and time or frame numbers.
Below that would be the normal timeline stuff. When the editor panel is maximized, this scale portion of the timeline would appear at its bottom to make up for the lack of the timeline pane.
The current position of the playback cursor determines the moment in time shown in the editor panel, and thus which target frame any changes will affect (or the time at which a new target frame will be created, if there isn't already a target frame for that object at the current position). The cursor's position also determines where playback or recording starts.
If you hold down shift while dragging the timeline cursor, then it selects a range of time. The entire vertical area of the selected range would be shaded on the timeline, but it'd be most evident on the plain region above the scale. This will limit the duration of playback or recording to just the selection range. Static editing while a range is selected results in a pair of target frames on either end of the range.
Holding down control while dragging the timeline cursor snaps it to the nearest segment boundary, or to nearby target frames if target frames are currently being shown in the timeline.
(Dragging segments while holding control always snaps them to each other's ends.)
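As a tiny sketch, assuming we can gather the candidate snap times (segment boundaries, or target frames when they're shown) into a non-empty list, the snapping rule is just nearest-neighbor:

import Data.List (minimumBy)
import Data.Ord (comparing)

-- Snap a cursor time to the nearest candidate time; assumes the
-- candidate list is non-empty.
snapCursor :: [Double] -> Double -> Double
snapCursor candidates t = minimumBy (comparing (abs . subtract t)) candidates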