I've been trying to pull inspiration for the design of the
OIWL widget library from a few places. One set of places, of course, is other widget libraries that I use (like wxpython and gtkmm) or admire (like android.widget). For example, many libraries that I'm fond of use semi-automatic placement of widgets within layouts, so I decided to use layouts pretty liberally in my design.
I'm having difficulty deciding how to handle events, though. Here are some notes:
- Only Frames (i.e. MIDP Canvas objects) actually receive user events. A frame must then pass information about the event along to its Widget objects.
- Some events should block others. For example, if I'm going to flick a frame, I don't want buttons to think that I'm looking to click them. Likewise, if I'm clicking a button that's on top of another, I want to be clicking the top button and not both of them (notwithstanding the fact that my buttons shouldn't be overlapping).
- I'm thinking a WidgetParent should pass user events down to its direct children. Each child can choose to handle the event and block others from handling it (return true) or not (return false). If none of its children blocks the event, the parent can then choose whether to handle and block it itself. Seems reasonable every time I think about it, and I think it's simple enough if all the events that one cares about are pointer presses, drags, and releases (see the sketch just after this list). However, there are still issues:
- Consider "complex" events like clicking (i.e. a pointer press and release where the pointer never leaves the widget, or never moves from the spot where it was pressed), double clicking, and flicking (i.e. a pointer press followed by a drag and a pointer release while the pointer is still moving).
- More concretely, consider the case of a button on a frame or widget that can scroll. There are instances when it's infeasible to require the user to press a portion of the screen where there are no widgets in order to drag the frame -- like in the case of a list frame, in which the entire view is covered by what are essentially buttons (list items). How then would we distinguish the first part of a button tap (pointer down) from the first part of a view scroll (pointer down)?
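Before tackling that question, here's a rough sketch of the basic pass-down-and-block rule from the earlier bullet. Every name here (Widget, WidgetParent, Event, handleEvent, and the cancelEvents hook that the cancellation idea further down will need) is a placeholder I made up for illustration, not settled OIWL API:

    // All placeholder names -- not OIWL's actual API.
    interface Widget {
        // Return true to consume the event and block everyone else from seeing it.
        boolean handleEvent(int type, int x, int y);

        // Give up any half-finished gesture (needed for the cancellation idea below).
        void cancelEvents();
    }

    class Event {
        static final int PRESSED  = 0;
        static final int DRAGGED  = 1;
        static final int RELEASED = 2;
    }

    class WidgetParent implements Widget {
        protected final java.util.Vector children = new java.util.Vector(); // of Widget

        void add(Widget w) {
            children.addElement(w);
        }

        public boolean handleEvent(int type, int x, int y) {
            // Offer the event to each direct child first.
            for (int i = 0; i < children.size(); i++) {
                if (((Widget) children.elementAt(i)).handleEvent(type, x, y)) {
                    return true; // a child consumed it, so it's blocked for everyone else
                }
            }
            // No child wanted it; the parent itself may handle (and block) it.
            return handleSelf(type, x, y);
        }

        public void cancelEvents() {
            for (int i = 0; i < children.size(); i++) {
                ((Widget) children.elementAt(i)).cancelEvents();
            }
        }

        protected boolean handleSelf(int type, int x, int y) {
            return false; // a plain container ignores input by default
        }
    }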
But maybe the question answers itself. Both a button tap and a view widget scroll should be treated as having started with the pointer press. However, when the pointer starts to drag, the view widget should check whether the button wants to handle the drag event. When the view widget learns that the button does not want to handle the drag, it should begin scrolling and tell the button (and the rest of its children) to cancel whatever in-progress events they have.
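Building on the sketch above, that hand-off might look something like this: the scrolling container lets its children see every event first, and the moment a drag comes back unclaimed it treats the gesture as a scroll and cancels the children's half-started taps. (ScrollingView and scrollBy are invented names; this is the idea, not working OIWL code.)

    class ScrollingView extends WidgetParent {
        private int lastY;

        public boolean handleEvent(int type, int x, int y) {
            if (type == Event.PRESSED) {
                lastY = y; // remember where the gesture began; the children see the press too
            }
            // Children always get first refusal.
            if (super.handleEvent(type, x, y)) {
                return true;
            }
            if (type == Event.DRAGGED) {
                // No child wanted the drag, so this must be a scroll: cancel the
                // children's in-progress events and consume the drag ourselves.
                cancelEvents();
                scrollBy(lastY - y);
                lastY = y;
                return true;
            }
            return false;
        }

        private void scrollBy(int dy) {
            // Adjust the viewport offset and request a repaint (omitted).
        }
    }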
So I'll try implementing this model: when the Frame receives a pointerPressed, it calls its layout's handleEvent method with the event type (Event.PRESSED) and the location data. The layout will then pass the event and data along to all of its children's handleEvent methods.
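In MIDP terms, that first hop might look roughly like this. Canvas really does deliver pointerPressed, pointerDragged, and pointerReleased callbacks; the layout field and the handleEvent signature are still the placeholder names from the sketches above:

    import javax.microedition.lcdui.Canvas;
    import javax.microedition.lcdui.Graphics;

    class Frame extends Canvas {
        private final WidgetParent layout = new WidgetParent(); // the frame's top-level layout

        protected void pointerPressed(int x, int y) {
            layout.handleEvent(Event.PRESSED, x, y);
        }

        protected void pointerDragged(int x, int y) {
            layout.handleEvent(Event.DRAGGED, x, y);
        }

        protected void pointerReleased(int x, int y) {
            layout.handleEvent(Event.RELEASED, x, y);
        }

        protected void paint(Graphics g) {
            // Painting is a separate concern; omitted here.
        }
    }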
The first thing a given widget will do upon receiving the pointer press notification is check whether any of its children can satisfy an event with it. If one of them can, the widget will tell all of its other children to cancel their in-progress events, and it will return true.
If none of a widget's children can satisfy an event with a given user input, then the widget has three options. If the widget doesn't care about pointer-pressed input, it will simply and immediately return false. If the widget needs the Event.PRESSED to start an event (like a button tap), it will mark that event as started, but still return false. If the widget can complete an event with the Event.PRESSED, it will cancel all of its children's in-progress events, do whatever it does on that event, and return true.
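For a simple tappable widget, those options might come out like this (still using the made-up interfaces from the earlier sketches). A leaf widget like a button has no children to ask first, so it goes straight to the options, and it only ever exercises the first two, since it needs the release to finish the tap; a widget that acts on the press alone would take the third option, cancelling its children and returning true right away.

    class TapButton implements Widget {
        private final int left, top, width, height;
        private boolean tapInProgress;

        TapButton(int left, int top, int width, int height) {
            this.left = left;
            this.top = top;
            this.width = width;
            this.height = height;
        }

        public boolean handleEvent(int type, int x, int y) {
            boolean inside = x >= left && x < left + width && y >= top && y < top + height;

            if (type == Event.PRESSED) {
                if (!inside) {
                    return false;      // option one: this press is of no interest
                }
                tapInProgress = true;  // option two: a tap has started...
                return false;          // ...but don't block anyone else yet
            }
            if (type == Event.RELEASED && tapInProgress && inside) {
                tapInProgress = false;
                onTap();               // the tap completed; consume the release
                return true;
            }
            return false;              // drags (and stray releases) are declined
        }

        public void cancelEvents() {
            tapInProgress = false;     // a parent decided the gesture was really a scroll
        }

        protected void onTap() {
            // Whatever the button does when tapped.
        }
    }

If this holds up, the button never has to know whether it's sitting inside something scrollable; the parent's cancel call is the only coordination between them.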