technologies, including use of an object-oriented database on a large scale. It was exciting. People
on the team would proudly tell visitors that we were deploying the biggest database this
technology had ever supported.
When I joined the project, different teams were spinning out
object-oriented designs and storing their objects in the database effortlessly. But gradually the
realization crept upon us that we were beginning to absorb a significant fraction of the database's
capacity—with test data! The actual database would be dozens of times larger. The actual
transaction volume would be dozens of times higher. Was it impossible to use this technology for
this application? Had we used it improperly? We were out of our depth.
Fortunately, we were able to bring onto the team one of a handful of people in the world with the
skills to extricate us from the problem. He named his price and we paid it. There were three
sources of the problem. First, the off-the-shelf infrastructure provided with the database simply
didn't scale up to our needs. Second, storage of fine-grained objects turned out to be much more
costly than we had realized. Third, parts of the object model had such a tangle of
interdependencies that contention became a problem with a relatively small number of concurrent
transactions.
With the help of this hired expert, we enhanced the infrastructure. The team, now aware of the
impact of fine-grained objects, began to find models that worked better with this technology. All of
us deepened our thinking about the importance of limiting the web of relationships in a model, and
we began applying this new understanding to making better models with more decoupling between
closely interrelated aggregates.
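To make that decoupling lesson concrete, here is a minimal sketch, in Java, of the kind of restructuring the paragraph describes: one aggregate refers to another by identity rather than by holding a direct object reference, so loading or locking one aggregate does not drag its neighbors into the same transaction. The names used here (Order, Customer, CustomerId) are hypothetical illustrations, not taken from the project described above.

```java
import java.util.Objects;

// Identity value object that stands in for a direct reference to the Customer aggregate.
final class CustomerId {
    private final String value;

    CustomerId(String value) {
        this.value = Objects.requireNonNull(value);
    }

    String value() {
        return value;
    }
}

// Tightly coupled version (the kind of tangle that caused contention):
// class Order {
//     private Customer customer;   // loading an Order drags the Customer's whole graph along
// }

// Decoupled version: the Order keeps only the identifier; the related aggregate is
// looked up through a repository only when it is actually needed.
final class Order {
    private final String orderId;
    private final CustomerId customerId;   // reference by identity, not by object

    Order(String orderId, CustomerId customerId) {
        this.orderId = Objects.requireNonNull(orderId);
        this.customerId = Objects.requireNonNull(customerId);
    }

    CustomerId customerId() {
        return customerId;
    }
}
```

The price of the indirection is an explicit lookup when the related aggregate is really required; the benefit is that each transaction touches a much smaller object graph, which is exactly what relieves the contention described above.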
Several months were lost in this recovery, in addition to the earlier months spent going down a
failed path. And this had not been the team's first setback resulting from the immaturity of the
chosen technologies and our own lack of experience with the associated learning curve. Sadly, this
project eventually retrenched and became quite conservative. To this day they use the exotic
technologies, but for cautiously scoped applications that probably don't really benefit from them.
A decade later, object-oriented technology is relatively mature. Most common infrastructure needs
can be met with off-the-shelf solutions that have been used in the field. Mission-critical tools come
from major vendors, often multiple vendors, or from stable open-source projects. Many of these
infrastructure pieces themselves are used widely enough that there is a base of people who
already understand them, as well as books explaining them, and so forth. The limitations of these
established technologies are fairly well understood, so that knowledgeable
teams are less likely to
overreach.
Other interesting modeling paradigms just don't have this maturity. Some are too hard to master
and will never be used outside small specialties. Others have potential, but the technical
infrastructure is still patchy or shaky, and few people understand the subtleties of creating good
models for them. These may come of age, but they are not ready for most projects.
This is why, for the present, most projects attempting MODEL-DRIVEN DESIGN are wise to use
object-oriented technology as the core of their system. They will not be locked into an
object-only system: because objects have become the mainstream of the industry, integration
tools are available to connect with almost any other technology in current use.
Yet this doesn't mean that people should restrict themselves to objects forever. Traveling with the
crowd provides some safety, but it isn't always the way to go. Object models address a large
number of practical software problems, but there are domains that are not natural to model as
discrete packets of encapsulated behavior. For example, domains that are intensely mathematical
or that are dominated by global logical reasoning do not fit well into the object-oriented paradigm.