I haven’t written any ‘crazy idea’ posts for months, so I figured I must
be due. Over the years I’ve been playing around with various ‘dynamic’
behavior in the code as a way of maximizing reuse. The fundamental
design has been to encapsulate the domain issues within a simple DSL,
then drive both the interface and the database dynamically.
On the interface side, I’ve used declarative dynamic forms as the atomic
primitive for the screen layouts. This allows me to dynamically alter
the navigation as well as reduce the amount of code required to define a
screen, and to persist user-constructed screens and workflows.
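To make that concrete, here’s a rough sketch of what a declarative form might look like as plain data (all of the names are my own invention, not from any framework): the screen is just a value, so it can be generated, altered and persisted at run time.

    import java.util.List;

    public class DeclarativeFormSketch {
        record Field(String name, String widget, boolean required) {}
        record Form(String title, List<Field> fields, List<String> navigation) {}

        public static void main(String[] args) {
            // A user-editable screen is just data; persisting user-constructed
            // screens means serializing this structure, not writing new code.
            Form customer = new Form(
                "Customer Details",
                List.of(
                    new Field("name",    "text",   true),
                    new Field("region",  "select", false),
                    new Field("balance", "money",  false)),
                List.of("orders", "invoices"));
            System.out.println(customer);
        }
    }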
This type of paradigm is too expressive for basic relational database usage,
so initially I built a key/value NoSQL-like (not distributed) database.
For another attempt I wanted the external connectivity of an RDBMS, so I
went with Hibernate and a long-skinny generic schema. The earlier
attempt required significantly less code and was easier to use, but the
later attempt allowed for reporting and integration once I wrapped
Hibernate with an OODB-like interface.
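For reference, the long-skinny generic schema is essentially an entity/attribute/value table: one table holds every domain object, so the relational schema never changes when the DSL model does. A sketch with JPA annotations (the table and field names here are purely illustrative):

    import jakarta.persistence.Column;
    import jakarta.persistence.Entity;
    import jakarta.persistence.GeneratedValue;
    import jakarta.persistence.Id;
    import jakarta.persistence.Table;

    // One row per attribute of a domain object; the 'columns' live as data.
    @Entity
    @Table(name = "attribute_values")
    public class AttributeValue {
        @Id
        @GeneratedValue
        private Long id;

        @Column(name = "entity_id")
        private Long entityId;    // which domain object this row belongs to

        @Column(name = "attribute")
        private String attribute; // the attribute name, stored as data

        @Column(name = "value")
        private String value;     // everything as text, coerced by the DSL layer

        // getters/setters omitted
    }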
Driven by a DSL, these systems have been very flexible, allowing the users to
essentially move into the system and adapt it to their specific needs.
The downside has been that the abstractions involved require fairly deep
thinking about extending the systems. Most programmers prefer writing
new code via brute force, so the speed of development is limited by
finding people who won’t just hack madly around the existing code base,
but are willing to read and reuse the infrastructure.
In thinking about these types of resource issues, my feeling is that the
kernel of any such architecture should be as small as possible and need
very few modifications. Growing the system then is a matter of just
inserting domain-specific algorithms and features into a predefined
location in the architecture. That almost works, but in my last version I
really ended up with three different places where the code had to be
extended. With three different choices, I found that some programmers
would pick the wrong location, slam their square peg into the round
hole, and then try to compensate by shoving in lots of extra, artificial
complexity. Choice, it seems, leads to people wanting to ‘creatively’
subvert the architecture.
My thinking these days (although it may be a while before I get a chance to
try it out) is that I want the extendability of the system to come down
to one, and only one, place. Taking away choice may sound mean, but I’ve
always found it better to balance out programmer freedoms with system
success. If too much freedom incurs a massive risk of failure, well… I’d
rather the system really worked properly at the end of the day. It’s a
happy user vs. a happy programmer tradeoff.
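A minimal sketch of what that single place might look like (the interface and all the names here are hypothetical): every new feature is one implementation of one interface, registered with the kernel, and there is nowhere else to put code.

    import java.util.ArrayList;
    import java.util.List;

    public class SingleExtensionPoint {
        // The lone extension point in the whole architecture.
        interface DomainFeature {
            String entityName();   // what the feature adds to the model
            List<String> fields(); // the data it contributes
        }

        static final List<DomainFeature> REGISTRY = new ArrayList<>();

        public static void main(String[] args) {
            // Extending the system is just registering another feature; the
            // kernel derives the screens and storage from the registry.
            REGISTRY.add(new DomainFeature() {
                public String entityName() { return "invoice"; }
                public List<String> fields() {
                    return List.of("customer", "amount", "due");
                }
            });
            REGISTRY.forEach(f -> System.out.println(f.entityName() + " " + f.fields()));
        }
    }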
As well as encapsulating the extendability, I got to thinking that the next
wave of computing is going to take place on a nearly infinite number of
form factors. That is, the screen size will vary from being watch size
all the way up to wall size. It doesn’t make a lot of sense to write a
huge number of nearly identical systems -- one for each size -- if we
can enlist the computer to dynamically handle them for us.
ORMs and OODBs allow programmers to specify their internal data
models, then have these drive the persistent storage structures. The
slight wrinkle is that the persistent storage may be shared across
several different applications, so its underlying model is likely a
domain-driven ‘universal’ one instead of the various application-specific
models. Subsets and inherent context are likely the bulk of the
differences.
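In code, that split might look something like the sketch below (all names invented): the shared store knows the whole domain, and each application model is mostly a filtered view of it plus some context of its own.

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class ModelSubsets {
        record UniversalModel(Set<String> entities) {}
        record ApplicationModel(String app, Set<String> entities,
                                Map<String, String> context) {}

        // An application model is a subset of the universal one, in context.
        static ApplicationModel project(UniversalModel u, String app,
                                        Set<String> wanted,
                                        Map<String, String> ctx) {
            Set<String> subset = new HashSet<>(wanted);
            subset.retainAll(u.entities());
            return new ApplicationModel(app, subset, ctx);
        }

        public static void main(String[] args) {
            UniversalModel domain = new UniversalModel(
                Set.of("customer", "order", "invoice", "shipment"));
            System.out.println(project(domain, "billing",
                Set.of("customer", "invoice"), Map.of("currency", "CAD")));
        }
    }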
Without worrying too much about the model differences, the other half of the
dynamic equation is for that application model to directly drive the
interface layout. Way back many companies tried to drive interfaces off
relational schemas, but these systems proved too awkward and cumbersome
to catch on. My sense is that the application modelling of the data
needs to be driven heavily from the user/navigation side, rather than
the storage side. That is, the application model reflects not only the
domain structure of the data, but also how the users want to manipulate
that data.
If we can find an appropriate intermediate representation, then the rest of
it is easy. For each entity/datum in the model we attach a presentation
template. To cope with the form-factor-free ability, we attach links
that handle both navigation and neighborhood relationships. When the
user navigates to a screen, we get both the primary entities and the
current form factor. From that we simply find anything in the
neighborhood that fits in as well. Go to the user’s screen and if your
screen is big enough, you’ll see all sorts of related information. Of
course one has to deal with paging and dynamic navigation, as well as
widgets, validation, and the normal dynamic-forms problems like cross-field
validations/updates. The base problems in my earlier systems weren’t
simple, but they weren’t really cutting edge either.
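Here’s a toy sketch of that neighborhood mechanic (the names and sizes are all invented): each entity has a template that knows how much space it needs, links define the neighborhood, and the screen greedily pulls in neighbors while the current form factor still has room.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    public class NeighborhoodLayout {
        record Template(String entity, int minWidth) {} // space the template needs
        record Link(String from, String to) {}          // navigation/neighborhood edge

        static List<String> buildScreen(String primary, int screenWidth,
                                        Map<String, Template> templates,
                                        List<Link> links) {
            List<String> shown = new ArrayList<>();
            int used = templates.get(primary).minWidth();
            shown.add(primary);
            // Greedily add neighbors while the form factor has room left.
            for (Link l : links) {
                if (!l.from().equals(primary)) continue;
                Template t = templates.get(l.to());
                if (used + t.minWidth() <= screenWidth) {
                    shown.add(l.to());
                    used += t.minWidth();
                }
            }
            return shown;
        }

        public static void main(String[] args) {
            Map<String, Template> templates = Map.of(
                "user",     new Template("user", 200),
                "orders",   new Template("orders", 400),
                "activity", new Template("activity", 300));
            List<Link> links = List.of(
                new Link("user", "orders"), new Link("user", "activity"));
            // A watch-sized screen shows only the primary entity...
            System.out.println(buildScreen("user", 250, templates, links));
            // ...while a wall-sized one pulls in the whole neighborhood.
            System.out.println(buildScreen("user", 2000, templates, links));
        }
    }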
One time-saving possibility is that the screen construction happens at
compile time rather than run time. Building the system would then
produce many different components -- one for each form factor. It would
be nicer to do this dynamically on-the-fly, but one always has to be
wary of eating up too much CPU.
If it all worked as planned (it almost never does), extending the system
is just extending the application data model. If you needed to add a new
feature you’d start by integrating any new data into the model. New
calculations would go in by adding new ‘derived’ entities which would be
bound with calculations underneath. All of the presentation/navigation
stuff would decorate the data, then all you’d need to do is just
recompile, test and re-release. Changes that might normally take months
could fall to weeks or days. The model intrinsically enforces any type
of organization or convention and can easily be reviewed by other
programmers. With the extendability encapsulated, the base work would
pay off in producing a system that could expand for years or decades
without having clocked up much technical debt.
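To illustrate that last point, a tiny sketch of a ‘derived’ entity (names invented): the new feature is one model entry with its calculation bound underneath, and the presentation and navigation decoration hang off it like any other entity.

    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;

    public class DerivedEntities {
        record Derived(String name, List<String> inputs,
                       Function<Map<String, Double>, Double> calc) {}

        public static void main(String[] args) {
            // Adding 'margin' to the system is one model entry, not new screens.
            Derived margin = new Derived("margin", List.of("revenue", "cost"),
                in -> in.get("revenue") - in.get("cost"));

            Map<String, Double> row = Map.of("revenue", 120.0, "cost", 85.0);
            System.out.println(margin.name() + " = " + margin.calc().apply(row));
        }
    }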