time conversation with a short message, delivered when convenient for you, and then read when convenient for the recipient. To many, this asynchronous approach to communication seemed strictly more efficient. One technology commenter I came across in my research compares synchronous communication—the type that requires actual conversation—to an outdated office technology like the fax machine: it’s a relic, he writes, that “will puzzle your grandkids” when they look back on how people used to work.18
The problem, of course, is that email didn’t live up to its billing as a productivity silver bullet. The quick phone call, it turns out, cannot always be replaced with a single quick message, but instead often requires dozens of ambiguous digital notes passed back and forth to replicate the interactive nature of conversation. Multiply this dynamic across the many formerly real-time exchanges now handled through multitudinous messaging, and you get a long way toward understanding why the average knowledge worker sends and receives 126 emails per day.19
Not everyone, however, was surprised by the added complexity of drawn-out communication. As email was taking over the modern office, scholars in the theory of distributed systems—the subfield of computer science that I study in my academic research—were also examining the trade-offs between synchrony and asynchrony. As it happens, the conclusion they reached was exactly the opposite of the prevailing consensus in the workplace.
The synchrony-versus-asynchrony issue is fundamental to the history of computer science. For the first couple of decades of the digital revolution, programs were designed to run on individual machines. Later, with the development of computer networks, programs were written to be deployed on multiple machines that operated together over a network, creating what are called distributed systems. Figuring out how to coordinate the machines that made up these systems forced computer scientists to confront the pros and cons of different communication modes.
If you connect a collection of computing machines on a network, their communication, by default, will be asynchronous. Machine A sends a message to Machine B, hoping that it will eventually be delivered and processed, but Machine A doesn’t know for sure how long it will be until Machine B reads the message. This uncertainty could be due to many factors, such as the fact that different machines run at different speeds (if Machine B is also running many other unrelated processes, it might take a while until it gets around to checking its queue of incoming messages), unpredictable network delays, and equipment failures.
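
To make this model concrete, here is a minimal sketch in Go (the machine names, message, and delay values are invented for illustration; nothing like it appears in the original text) of such an asynchronous exchange: the sender fires off its message and immediately moves on, while the receiver gets around to it only after an unpredictable delay.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        // A buffered channel stands in for the network link between
        // the two machines.
        link := make(chan string, 1)

        // "Machine A" fires off its message and moves on at once; it
        // has no way of knowing when B will actually read it.
        go func() {
            fmt.Println("A: sending message and moving on")
            link <- "request"
        }()

        // "Machine B" is busy with unrelated work, so it checks its
        // queue of incoming messages only after an unpredictable delay.
        delay := time.Duration(rand.Intn(2000)) * time.Millisecond
        time.Sleep(delay)
        fmt.Printf("B: processed %q after %v\n", <-link, delay)
    }

The point of the sketch is simply that the send and the receive are decoupled: nothing in the system bounds how much time may pass between them.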
Writing distributed system algorithms that could handle this asynchrony turned out to be much harder than many engineers originally believed. A striking computer science discovery from this period, for example, is the difficulty of the so-called consensus