Google's interactive search logo today provides a neat analogy for
interactive visualization, don't you think?
It reminds me that in my slideset "towards
a layout of the software visualization zoo" I am missing dynamic
visualization. No, I don't mean visualization of system dynamics or behavior; I
mean interactive visualization: pan and zoom, for example. The other thing that
I haven't provided examples of, so it may seem to be missing, is tools that
cover many facets of the "zoo" -- the CAE/round-trip engineering/MDE sorts of
tools that have multiple views. I am missing, though, another dimension, also
related to dynamic visualization -- that of navigating and tracing threads of
reasoning that weave among and between views.
This is an interesting research stream: bringing models closer to code,
bringing together MDE and DSLs and more.
2/8/11 Enterprise Architecture Business Briefing: USA-Vietnam
"USTDA is sponsoring a 14-day Reverse Trade
Mission for officials from the Ministry of Information and Communications (MIC)
in Vietnam. ...
The Reverse Trade Mission will introduce EA
models and best practices to support DIAP’s efforts to design a government wide
policy standard based on specific industry technology solutions. In addition,
the visit will facilitate progress in e-government adoption by addressing
technical and interoperability issues, as well as encourage unified planning and
procurement approaches. The delegation’s priorities will present opportunities
for U.S. suppliers interested in Vietnam’s ICT sector."
-- Business Council for International Understanding,
Last night I watched Ralph
Johnson on InfoQ, which reminded me to look in on Jose Fernandez/PPOOA.
Speed matters. Well, we've been living in a world where "best effort
engineering" is generally good enough, and when it's not we then focus on tuning
up that dimension (or throwing more hardware at the problem). At the same time,
though, our expectations just keep getting higher, increasing the demands on
performance and increasing complexity (making performance a harder game). So, we
just have to get better at designing to meet key system properties, which means
getting a better handle on what those are. You are no doubt familiar with the
work the SEI has here, but Tom
Gilb's (of Evo/iterative development fame) book on Competitive
Engineering is important too.
Anyway, performance... taking advantage of multi-core... I need to put together a list like this one for coupling and cohesion and related concerns, this one for system failure, and this one for visualization tools (with an emphasis on software visualization, but with
broader scope so we don't neglect precedent being developed in other fields, or
the importance of learning by analogy) for performance and especially parallel
programming concerns. (And then, because I was foolish enough to create the
lists, I need to update them... For example, I need to add Grady Booch's IEEE
Software article on system failure (and his reading of it) to the system failure list!)
Don Wilson makes a point in this slideset (from 2005, but the point pertains) that we need to
close the gap between industry and academic research. Which is not to say that
academic research should be less frontier pushing, but only that industry needs
to be more curious and investigative, and researchers need to build bridges to practice.
This is interesting:
"The central objective for the ArtistDesign
European Network of Excellence on Embedded Systems Design is to build on
existing structures and links forged in the FP6 Artist2 NoE, to become a virtual
Center of Excellence in Embedded Systems Design. This is mainly achieved through
tight integration between the central players of the European research
community. These teams have already established a long-term vision for embedded
systems in Europe, which advances the emergence of Embedded Systems as a mature
discipline." -- ARTIST Network of Excellence on Embedded Systems Design
The ToC and random readable pages of the ARTIST Roadmap for Embedded System Design Research provide a little window.
This virtual Center of Excellence notion is exciting/compelling.
This address by Jim March is quite splendid! He ends with this:
"Humans are perishable. That
may be; but let us perish resisting, and, if nothingness is what
awaits us, let us not act in such a way that it would be a just
fate." -- Étienne Pivert de Sénancour, Obermann, 1804, letter 90
Yes, splendid! Though I have... my nuanced views. I, for example, think that we
harness complexity not by ignoring it, nor by dismissing it, but by entertaining
it, embracing it, and finding within the "contradictions, paradoxes,
ambiguities, and ambivalences" the structures and patterns that may be inspired
or simply just good enough to get going, knowing we'll improve and do better. So our rhetoric of "simplicity, clarity, consistency, and certainty" can be
authentic and enabling, though it is built on the premise that it is not static
and unyielding to the lessons of time. Of course, it often is static, and
becomes more so. But that's just because we struggle to accommodate our learning.
I was sorry that Jim ended by applying his insights to education rather than
leadership in organizations more generally. Of course, he was speaking to
leaders in education transformation, but the introduction and body of his talk
addresses leaders in business... And I think that generally, not just in
education but in business, a revolution is overdue. I believe that it is
important to see how the ways we follow our Bliss on and off the job informs and
improves what we do at work. I think that was the genius (or genie) in Steve
Jobs' commencement address. The notion that his encounter with art and beauty
(in calligraphy) at Reed transformed our world -- and has Nokia looking at
jumping into the figurative chill sea -- is huge! Everyone has their different
proclivities and places their Bliss leads them, and enriches their perspective,
their aesthetic sensibilities and what they bring to design (of organizations,
of systems, and even, if we are confined there, of the soft guts of machines).
A "Renaissance man" like Leonardo da Vinci was amazingly inventive because he
brought much together within himself. Ok, so da Vinci is known for intellectual
frontiers he forged, and for masterpieces he completed -- and for starting a lot
more than he finished. Well, if we began as many works, and the small
proportion we finished were as great as da Vinci's, we'd stand the test of
history just as well!
Jim March, too, has been termed a Renaissance man. I have no doubt of the
fitness of the appellation. Still,
for me, for now, I'll just call him a muse -- a great one. Which is to say he
challenges me to wrestle where he rouses discomfort in me, for it is there that
the boundaries of my thinking are stretched (where I agree with him, he's
putting words to, adding to and confirming what I sense or see -- more elegantly than I and
very useful to me but still within my zone of comfort; where he disquiets me
there is new ground to hoe). Ok, so let me back up just a moment -- I mentioned
Miyazaki's Howl's Moving Castle in a post I eliminated, so I'll refresh
at least that observation:
One of the key insights that Howl's Moving Castle magically conveys, is that people are complex bundles of
good and evil, strength and weakness. Good people are such bundles, just with
more good than bad, and in their internal battles between good and evil, the
good generally wins out. The contrary for evil people. And good and evil in the
world is the externalization of these internal battles. And the reaching out of
good in one person magnifies and assists the forces for good in another, and
changes the balance of their internal battle. When looked at that way, there's
so much more scope for compassion and empathy! And so much more reason to reach out.
At first I wondered at the protagonist as cleaning woman... and then realized
Miyazaki has a lovely way of seeing women, and this serving role is emblematic
of what women are to others -- not servile, but serving, and though humble it
can be a major force for greatness in self and others. Here is another view,
from my reaction to Nausicaä of the Valley of the Wind:
Isn't it just incredible that
11 year old American girls have as their hero a 70 year old Japanese man? But
not just any. One who made girls his protagonists (earning him the "feminist"
appellation) and who makes pacifism and environmentalism key themes. One who has
a penchant for flying machines, but puts girls on them, letting them soar! One
who blends literature and visualization in an exquisite art form that lets the
imagination defy and so lead reason.
In Jim March's Leadership movie, Don Quixote's "I picture her in my imagination as I would
have her be" (minute 24:15) is applauded. Well, to me, the most moving
declaration of love is one that comes from knowing intimately the bundle of
a person's complexity and compromise, and seeing her (or him) with a joyfully
inclined spirit that hallows all that is beautiful in her. "As I would have her
be" asserts one's self onto another person. Which is to say, I think it more
amazing still to be known -- seen -- to be utterly remarkable (in the ways that
one is), than to be imagined to be so (in the ways another desires). Known to be more than the lover can even imagine, based on what the lover sees with an
open-hearted (susceptible to her) mind! [I do love my husband who taught me that
to be so seen is an expansive gift, affirming my striving to be more myself.] I think that too is a message in Howl's Moving Castle, in which the protagonist comes to be loved not in
spite of her appearance but for the best in her, which includes what she is capable of being and becoming.
Her appearance is irrelevant to those who come to love her, and she becomes
as beautiful as they know her to be. She manifests herself.
Life is limited (time limited, resource limited, opportunity limited, etc.),
which forces compromise. And yet there are ways in which we do and are the
remarkable. We have to be able to embrace this duality within ourselves and
others, in our products and in our organizations.
"The test of a first-rate intelligence is
the ability to hold two opposed ideas in mind at the same time and still retain
the ability to function." -- F. Scott Fitzgerald
So, a leader doesn't act as if the situation isn't messy and ambiguous
and full of contradictions; the leader acts on the ways in which it is
not. The leader acts on the simplicity, clarity, consistency, and certainty that
she or he has seen, embracing the knowledge that there is difficulty and
ambiguity and uncertainty in much, but this thing that must be done is seen as
such. Compelling, even if much remains to be clarified. That is what the
leader puts words and images to, inviting others to lend their minds and voices
to building clarity in the vision and the components we hack and hew and craft
to build it out.
From time to time, Bill Gates reviews a book he's read. This from Bill's
review of The Rational Optimist:
"Like many other authors who write
about innovation, Mr. Ridley suggests that all innovation comes from new
companies, with no contribution from established companies. As you might expect,
I disagree with this view. He also seems to think that innovation involves
simply coming up with a new idea, when in fact the execution of the idea is
critical. He quotes the early venture capitalist Georges Doriot as saying that
as soon as a company succeeds, it stops innovating. A great counterexample is
Intel, which developed over 99% of its breakthroughs after its first success."
-- Bill Gates, Africa Needs
Aid, Not Flawed Theories, 11/30/10
It is true that many innovations, incremental and revolutionary, come from
industry incumbents. Of course, given how many resources they pour into R&D and
product development, one would hope so! Further, while small companies and
start-ups are an important engine of innovation, their mortality rate is high.
That said, it astonishes me that innovation so often happens right under the
noses of big companies, hidden in plain sight because big companies tend not to
be very curious about those who aren't on their radar. The key, of course, is
not always to be first, but to be early to the party. So when we see companies
having to buy their way in to innovations, we're seeing companies that aren't
cluing in to innovations as they are forming a new space in the market. Focus is
selective vision. But so is arrogance. And it can be hard to tell these apart.
You've no doubt read the Nokia "burning platform" story and the CEO's letter.
In Idealized Design, Russell Ackoff recounts a story from his formative
experience as a systems thinker and designer, in which the VP of Bell Labs
came in to an all-hands meeting and said the entire telephone system had
been wiped out and had to be redesigned from scratch.
Ok, so this is from what that VP then said in the 1950's:
Think that (the quotes are from Idealized Design by Russell Ackoff, Jason Magidson and Herbert
Addison) is what Nokia CEO Elop will say next? We never focused on (redefining) the (eco)system as a whole?
Well, it's one thing when you're on top, instigating landscape shifting
innovation. It's quite another when you look around and see "the platform
burning." But when the options are to bed down with Google or Microsoft, the
times are ...interesting... The burning question though, is whether jumping
platforms is enough. Nokia has been a leader in emerging markets, but
what has Nokia done to lead the development of (eco)systems in those
markets? I don't know, so that is an honest and innocent question. But it is
an interesting area; what is its relationship with, for example,
microfinance? (Gates' take on mobile
phones and banking in Africa bears reading.) Etc. Addressing sustainability
in the greater ecosystem sense lends sustainability to the systems that are
key relationship brokers within those ecosystems. Will Nokia plug in to
another's relationship platform, and if so does this consign Nokia simply
to commodity provision? Is there a way to make a distinguishing relationship
base on top of, for example, the Android platform? If not, is there a way
for Nokia to create distinctive "objects of desire" clearly separated from
others in a mobile device space where a device creates/designates/conveys
social status (not just among one-upping geeks, but in Africa, for example,
where a mobile -- and which one -- is a prized symbol of status).
Ok, so I'm running with a bit of tongue-in-cheek myself, suggesting that
there is a parallel between Elop's letter and Ackoff's story. Naturally I'm
tuned in to the double entendre in platform and it's a gutsy piece of
"leadership as performance art" in this age when nothing that interesting
stays confidential, so it was surely anticipated to hit the world stage,
stimulating exactly this interest in advance of tomorrow's announcement!
Elop is using a dramatic ploy to get the company (and shareholders) to
accept rethinking its direction and identity, and the bigger point behind my
allusion is that I wonder if Nokia will really do what needs to be done in
terms of larger (eco)system design so that it shapes a landscape it leads
in. When Daniel shot me a heads-up on the Elop story, I facetiously
replied, "They must be about to call on us, huh?" But just think what an opportunity they
miss not doing so! ;-)
Anyway, these pointers (via Daniel Stroe) are interesting;
By the time we see that we are standing on a burning platform the options are
all very dangerous. We have to bring design-to-delight and
design-for-sustainability into what we do now! And I do mean
sustainability in all its senses--including environmental. When we look around
and see this earth is a burning platform, there will be no ocean to jump into
and perhaps be rescued from!
Never despair of being able to make a contribution for the lack of
management mandate or charter. There are just so many that need to be made!
And most of them need to be made from where we each stand. A scant few
people lead in ways that are glory making, but we just don't, as a society,
have the attentional capacity or the largeness/generosity of spirit for
according all that many people glory! Indeed, many will try to tear them
down as fast as they build the right to claim attention.
But we can touch gentle ripples of change into the world by knowing what we
stand for, what our principles are, and what we value. And making a difference
through our intentional interventions, seeming small but all lined up with what
we just know must be brought about in this world.
That said, I know it is frustrating when, for example, management
retrenches in severe cost-cutting mode in the face of a harrowing recession.
If you're in a market where some are spending aggressively on innovation
(especially when resource-rich companies in neighboring markets are spending
aggressively to make a new landscape of the market you're in), it is
wrenching to see blinkered cost-cutting strategies that snap off
innovation projects like branches in an ice-storm! Wrenching because the
writing is on the wall! If anyone in the market is heavily and creatively
investing in reshaping innovation when others are not so bold, chances are
they're going to create a new balance of power. At any rate, they're
investing in making a different future happen, while the cautious are simply
trying to make it through to see another day. The bold have excitement and
energy on their side, and in tech markets that right there goes a
considerable distance! There are no sure winners, but shift happens. To
those who pay for it with ideas and energy. If you're big, you have
to pay more, because ideas are heavier and take more energy to carry to the
finish line. Or something like that. ;-)
Hey, won't it be great if Nokia has its own new platform they're going to
unveil tomorrow??? Or some other redirection and all this hullaballoo was
just a head-fake to increase the surprise and drama? :-)
2/11/11 Surprise! Not! The Intel/Microsoft comparison is interesting... but it applies
to Motorola/Google too. And therein lies the rub. Apple is leading mindshare and
breaking into market share no-one would have anticipated. Microsoft doesn't
dominate in this space the way it dominated in the PC space, and while both
Microsoft and Nokia are innovative and have their outstanding and highly
complementary technical strengths, it is Apple that is consistently shaping the market.
"It is hard to differentiate around the
form of the product. You have to differentiate on the experience."
-- Tom Hulme, IDEO
"In 2004, researchers at Nokia, the world’s leading mobile
phone company, presented a prototype of a new kind of mobile phone to the senior
management. The phone, which connected to the Internet, had a large bright
screen and was operated by fingers on a touch-screen. The researchers believed
that the device would be a winner in the fast-growing Smartphone market. Senior
management evaluated the proposal and decided that the risks of failure did not
warrant the costs: Nokia did not pursue development of the phone."
-- Steve Denning, From Trash Cans to Nokia: Is Creativity Innovation?, 2/17/11
Well, what isn't clear from that snippet, is whether Nokia researchers also were
thinking in terms of an App store and iTunes rival. Apple's genius was looking
beyond the device and seeing an opportunity to broker relationships within an
ecosystem (iTunes and then the App Store) not only for profit but to add to the
loyalty and sizzle factor. The iPhone isn't just a touch-screen phone with
internet access... [2/24/11: Ah, they were -- Ovi was created in 2003!!]
2/23/11: What went wrong?
"So Nokia had a Plan B, and it had a compelling developer
story. But it was too late. What killed Nokia's ambitions then was not
stupidity, but its bureaucracy." -- What sealed Nokia's fate? It's the bureaucracy, stupid,
Andrew Orlowski, 16th February 2011
I feel better and sick at the same time! I hate it when good
engineering is killed by bureaucracy!
"It's what political writers call the most
morally corrupting effect of bureaucracies: nobody takes responsibility. With
the three divisions covering their own backsides, nobody wanted to make the
long-term strategic investments necessary to keep platform software up-to-date.
This resulted in the Symbian user interface being neglected. Nokia had developed
a touch screen UI called Hildon, which became Series 90, starting in 2001 - and
that should have been the basis for Nokia's iPhone competitors today. But it was
canned in 2005."
-- When Dilbert
came to Nokia: Fascinating report shows how bureaucratic fear sealed
company's fate in 2003, Andrew Orlowski, 14th October 2010
Also related: Knock, Knock, Nokia's Heavy Fall... and Knock, Knock, Nokia's Heavy Fall... (Part III), 10/5/10.
"My main lesson is that fumbling the future is very easy.
I have done it myself! The future looks clear only in hindsight. It is rather
easy to practically stare at it and not see it. It follows that those who did
make the future happen deserve double and triple credit. They not only saw the
future, but also trusted their vision to follow through, and translated vision
to execution. We should all recognize the incredible contributions of those who
did not fumble the future."
-- Moshe Y. Vardi, Communications of the ACM, Vol 54 No 3,
We should not forget that Nokia very much shaped the present, which is the
platform for the future. That is to say, putting visions in place is tenacious
work, and it is hard, in that context, to make the next vision wave happen
especially when it undoes the one that has and is being unfolded. Hard, but
enough exceptions prove that industry incumbents can bring "creative waves of
destruction" upon themselves, pre-empting other sources. And then there's the cheese lesson.
5/29/11 Mobile Moves
On Imitation/Applying Analogy from Biology and Evolution
Janine M. Benyus, Biomimicry: Innovation Inspired by Nature, 2002
Donald DeYoung and Derrik Hobbs, Discovery of Design: Searching Out the Creator's Secrets, 2009
Kevin Kelly, Out of Control: The New Biology of Machines, Social Systems, & the
Economic World, 1995
Jason Brownlee, Clever Algorithms: Nature-Inspired Programming Recipes, 2011
(free online or for purchase)
On Computing and the Changing Social Fabric
Roots: Free Association
"In America I encountered sorts of
associations of which, I confess, I had no idea, and I often admired the
infinite art with which the inhabitants of the United States managed to fix a
common goal to the efforts of many men and to get them to advance to it freely."
-- Alexis de Tocqueville, Democracy in
America (originally published in 1835-1840)
Society on Technology:
2/10/11 On Books
There are so very many books...
"Afterwards, when she regained her
eyes, she read Shakespeare, and thought to herself, 'Why is any other book
needed?'"
-- Thomas W. Higginson, of Emily Dickinson
"If I read a book and it makes my
whole body so cold no fire can ever warm me, I know that is poetry. If I
feel physically as if the top of my head were taken off, I know that is
poetry. These are the only ways I know it. Is there any other way?"
-- Emily Dickinson
"I think we ought to read only the kind
of books that wound and stab us...We need the books that affect us like a
disaster, that grieve us deeply, like the death of someone we loved more
than ourselves, like being banished into forests far from everyone, like a
suicide. A book must be the axe for the frozen sea inside us."
-- Franz Kafka
Recommending Jacobsen's Six Stories, Rilke offers a quite different
perspective, though he still inclines to selectivity:
"A whole world will envelop you,
the happiness, the abundance, the inconceivable vastness of a world. Live
for a while in these books, learn from them what you feel is worth learning,
but most of all love them. This love will be returned to you thousands upon
thousands of times, whatever your life may become - it will, I am sure, go
through the whole fabric of your being, as one of the most important threads
among all the threads of your experiences, disappointments, and joys."
-- Rainer Maria Rilke, Letters to a Young Poet, letter 2
As for me, there are books that are utilitarian things that I dip and
dive around in, quickly finding what is useful to me.
And there are books I allow to enter me. To become part of my
consciousness. A slant to the lens I wear that I more clearly see; an
undoing, loosening affectation and corseting convention; a deepening in the
well of my empathy; a rousing to disquiet and urge me; an invitation to an
awed hallelujah in celebration of beauty in the created and creation.
And other stuff.
My brain freezes on lists, terrified knowing it will be incomplete and so
a showing of my limitations. Oh yeah. A reminder of the folly of conceit --
but I covered affectation... You see, that is what it is like to live with
my brain. Oh, do feel sorry for me! ;-)
2/11/11: This, from Edison's
personal diary (which he only kept for a few days) (linked from this page on the Rutgers Edison Papers archive):
Image source: Edison's Diary, on the Rutgers Thomas Edison Papers archive.
Another cool search logo today! Highlighting Edison's numerous
inventions and sketches, emphasizing their role in design! Way to go, Google!
And thanks to that prompting, this:
"For a long time very
little was known about Thomas Edison. Then, in 1978, a group of
historians from Rutgers University got together to form the Edison
Papers Project. Their aim was to compile, edit and publish Thomas
Edison's writings. Little did they realize what they were letting
themselves in for. The custodians of his record books have so far
discovered in the region of 3500 detailed note books and
approximately four million pages that the indefatigable Edison
poured his thoughts upon throughout his working years! And the
notebooks give a wonderful insight into the processes Edison used to be so
prolific as an inventor. The researchers liken his abundance of ideas and
observations to those of the man voted "the Genius of the Millennium" - Leonardo
da Vinci. That's high praise indeed. The pragmatic might argue that Edison was
the greater for using his genius to be so usefully productive. However, a glance
through Edison's notebooks is like dancing with a tornado ... his mind spins all
over the place.
What is immediately apparent is that Edison's mind wandered through a vast spectrum of
unrelated projects in an apparent free flow of associations. This is a critical
point to understand. The mind grows through the number of connections it can
make. Genius finds relationships between the most diverse things."
"Often, one of Edison's inventions would spawn another in an unrelated field, which in
turn would give rise to another in a different area of interest. It's as though
by pushing, experimenting and thinking in one direction, Edison simultaneously
benefitted in all the other projects that he was working on. This points to the
concept of the holographic mind. Affect one part and you affect all the parts.
Nothing is wasted."
"Many of Thomas Edison's greatest
ideas only emerged after he had made hundreds of drawings and cartoons. General
Electric has a collection of Edison's sketches and doodles that he made about the electric light bulb. Most of them are undecipherable. But each of
them had meaning for Edison and moved his thinking along closer and closer to the solution."
-- Scribbling towards success! Why the pen is mightier!, The Brain Squeezers
The Rutgers site on Edison provides a wonderful window, allowing us to access Edison and his team's notes and sketches.
Some are more careful, conveying the design, like this:
Others are just sketches of design ideas, like this:
Image source: Rutgers Thomas Edison Papers archive.
And others more sketchy still. Just grabbing the tail of an idea as it flew
by, and pinning it down in the notebook. But you can see from the two sketches
above, and the dates on them, that Edison and his team were experimenting -- on paper.
Picking Ryan up from school, we overheard this conversation:
"Are you calling me a nerd?"
"Don't worry, nerds are better than jocks. After high school, nerds rule."
Ryan wondered if this was true. I said, "think Microsoft, Google, Facebook,...
Pick your nerd." Ryan liked that.
This from Edison's diary: "... laughed heartily when I told her about a church
being a heavenly fire-escape."
I laughed heartily at that (and more; Edison is witty; a little pompous at
times, but I think he earned our tolerance there)!
At Ryan's school's variety concert at the end of last year, a blonde and a
brunette MC'd part of the concert, playing the stereotype:
Brunette: What do you call a smart blonde?
Brunette: A golden retriever
Blonde: What do you call a
2/11/11 (Im)maturity Models
It occurred to me that maturity models (like CMM) induce, even if
unintentionally, the wrong value set. If we
looked at an organization mechanistically following its documented processes
we'd be encouraged to think it more mature. But hold on a moment! It takes more maturity, in the wisdom sense, to fully
embrace a chaordic way of being, even or especially when the organization is navigating uncertainty.
Too many EA teams, when just starting up (yes, that is still happening),
think that the first order of business is to conduct an inventory assessment and drive
to consistency and a standardized, homogenous IT landscape. They think that
understanding the business is something an EA team should only tackle once it
has "climbed the maturity curve" -- meaning, once it has driven out chaos and
rationalized all of IT. Sure, chaordic means that where it is important to create
consistency, that is done. For example, where critical relationships are enabled
through consistency in the technology firmament. But trying to standardize
everything, regardless of where underlying diversity in the business thrives on
technological diversity, is a recipe for creating a lot of resistance to EA with
the net effect that "maturity" seems an impossible goal. We need to undo the
association between "maturity" and rationalized, precision seeking
organizations, at least as far as IT and product development is concerned.
Right... I need to read the updated GAO Guide, don't I?! (Organizational
Transformation: A Framework for Assessing and Improving Enterprise Architecture
Management, Vers. 2.0) Given a scan (it's 1:30am; that's all I can allow at
just this moment), one can fit chaordic within this
maturity framework, but the whole notion, I think, conveys a message of moving
towards more planning and rigor, more de rigueur... Fine. But we need to think
about how we communicate that maturity should be viewed in context. An
organization may embrace very ad hoc/on-the-fly, highly experimental and
laissez-faire approaches in parts of the business where that is the adaptive
response to uncertainty (inherent in "revolutionary" innovation) and hence be
more mature than one that insists on a leveled process and technology landscape
throughout. The framework doesn't disallow this, I don't think (given a cursory
scan). But does it encourage it? As I said, in my experience, too many teams
think that "as is" equals inventory and "to be" equals standards and
integration. Without regard for differences across the business (which may be
artificial artifacts of idiosyncratic choices, or deep differences in business
domains driving these differences in the technology landscape) or where the
businesses are headed.
When we go off on the wrong tack on what EA is, then build in a rigorous
management process to ensure its enactment, I have a hard time saying we're
reaching a higher maturity plateau...
2/11/11 We're Headed into Such a Different World!
From keyboard to Kinect to... mind control!
- Mind Out of Body: Controlling Machines with Thought, Scientific
American, Miguel A. L. Nicolelis, February 2, 2011
- Beyond Boundaries: The New Neuroscience of Connecting Brains with
Machines---and How It Will Change Our Lives, Miguel Nicolelis, 2011
- Can You Live Forever? Maybe Not--But You Can Have Fun Trying, Carl
Zimmer, December 22, 2010
- Mark Changizi on Humans, Version 3.0., Amira's notes, 2/23/11
- Telempathy: A future of socially networked neurons, Jens Clausen,
NewScientist, Feb 16, 2011
- The Future According to Schmidt: "Augmented Humanity," Integrated Into
Google, Kit Eaton, FastCompany, Jan 25, 2011
- [fiction] World Wide Mind: The coming integration of humanity, machines,
and the internet by Michael Chorost, 2011
From personal computers to ... personal satellites!
Yet some things take a while to change!
2/12/11: Caveat: We subscribed to Scientific American for a while,
mainly because I wanted to have "stretch" ideas around us, to pique interest and
stimulate dinner conversations and so forth, just giving the kids a sense of
what their world could be like.
The notion that we might be able to dump our thoughts directly into
computers (BCI) is quite something. Edison, in his diary, noted a discussion he
had with his daughter and other lady guests and he mused that it is just as well
our thoughts are not transparently available to all. You may be astonished to
find that I withhold many, given how many I share... why just this morning I
first thought how much France does to support those who counter social injustice
in the world (including knighting Johnny Clegg). At that, a cup-half-empty,
smart-alecky voice in my mind interposed with "Just as people distract
themselves from their own flaws by dwelling on those of others, so too does
France and the United States, pre-eminently." Well, fortunately that's not a
much heard voice in my mind. A kinder voice hurried in the next thought: "We all
have to contend with being flawed. The important thing is to have a social
conscience, and France and the United States do well to play that role for the
world. We are then most called upon to live by it."
Even so, I withhold many (thoughts; I'm less successful with judgments,
though I strive there too). Why just a minute ago... I was undressing this idea.
Just teasing. But that's the point, isn't it? The easier it becomes to pout (oh
what a lovely Freudian/finger slip that is! I meant pour but pout is too perfect!)
thoughts into computers, the more of them there'll be cluttering up the place.
Imagine -- more wordy than this journal! Incomprehensible! And yet. How
incomprehensible was a mechanized "brooding hen" before electricity? Edison's
diary is so wonderful -- it is too bad he didn't keep it longer (than just a few
days)... Was it because writing it took too much of his day? The part he wanted also to
give to journaling his work thoughts? Think what it'll be like, to have thoughts
captured if only we want to?
And if we don't... There are even rockier moral seas ahead!
But first we have to survive this moral crisis. This crisis of consumption
and waste. There are signs of promise. Too little too late? Last March I quoted
this, but it bears repeating:
"Necessity may well be the mother of
invention. But if we continue to manufacture mountains of toxic stuff, invention
may soon become the mother of necessity." Marty Neumeier, The
Designful Company, 2009
Moral seas. When I was a teenager going to a convent this notion kept me on
the straight and narrow: the nuns who taught me, once dead, would be freed from
time, and their spirits would be able to come back in time to my present and
watch all I did -- and maybe even all I thought. Wikileaks had nothing on my
imagination! And saints I didn't know were far less inhibiting than nuns I did
know! :-) At this juncture, we are judged by our observed outward acts -- and I
feel like a transparent hypocrite driving an SUV (bought several years ago,
before we acquired this mounting awareness; in the summer it's great for carting
kayaking/camping gear to departure points, but in the winter it sure is
overkill). As our expectations around privacy get reshaped, and values shift,
the questions and concerns of "good" and "evil" and also of policing society to
make it safe for those who strive-falter-strive to be good, will shift too.
It's the in-between times that concern me. The times when we're sorting out
how to deal with what we unleash on the world. Like this in-between time, when
we consume so much and hang so much of our aspiration and self-concept on
consumption and hurt this remarkable -- and surely in all the universe rare --
emerald and azure planet, impeding our own abilities as the human race to
thrive, perhaps even survive, on it!
This is not unprecedented. We have visited mass destruction on peoples
before. But with each advance, our capacity to wreak devastation and to break
bodies and hearts just increases.
[If one is to write on a Saturday, it should be tragicomedy, shouldn't it?
Smart-aleck! Voices! Mind #1, you too #2! And the rest of you!
Wait! How is it, do you think, that Edison got so much thinking done with
only two voices? Omw!! That's it! Only two voices to attend to! I'm going to ...]
In the November/December 2010 issue of CrossTalk, in an interview, Grady Booch said:
"What makes it most difficult
to move from vision to execution is something that swirls around the
problems of design and the problems of organization. How do I best
architect a system? How do I best architect my organization to
deliver that system? As it turns out, there’s this wonderful,
delicious cusp of the technical and the social, and that’s where the
sweet spot for delivery is in education. How does one attend to the
fiercely technical problems, but at the same time be cognitive of
the social issues as well? I swear there are days that I go into an
organization where I’ll show up as über geek and other days I have
to show up as Dr. Phil, slapping faces around, saying, “My God, what
are you thinking?” So, in terms of where I think things need to
go—well, for people delivering software-intensive systems, I think
our education system has to attend to that dance between the
technical and the social."
Architectural Leadership Workshop anyone? We have really led the industry in
working at "this wonderful, delicious cusp
of the technical and the social," have we not? Martin Griss was very aware
of the people issues in reuse/product family/platform development, and I was
fortunate to work on projects shaped by him when I joined HP Labs. Then I had
the great privilege of working with Derek Coleman on the Team Fusion team, and
again the people/team side (along with architecture and Evo) was so central as
to be part of our identity. All of which laid the groundwork of values and
mindset for our architecture work inside HP in the 90's and then Bredemeyer
Consulting's work from the late 90's. Well, Dana came at this from a different
tack, namely that of software architect in HP's Operating Systems Lab -- one who
had been drawn to systems design even while an undergrad in CS, working with the
likes of Buckminster Fuller and Russ Ackoff one Summer, and taking a class from
Douglas Hofstadter in the timeframe that he was completing Gödel, Escher,
Bach -- so Dana was one of the students who got a draft copy and reviewed it
in class! (The only thing more valuable than Dana Bredemeyer's marked up draft
would be Hofstadter's own, right?)
Software visualization is an interesting special case of visualization, and
the broader work in visualization, I think, has much to offer our field. Dana
pointed me to David McCandless's wonderful TED talk on The beauty of data visualization. He makes a neat "visual and verbal
channel" point, which is worth thinking about in modeling as well as presentation.
He says, referring to research by Tor Nørretranders (a screenshot of the image
McCandless was talking about is shown on the right):
"Your sense of sight is the
fastest. It has the same bandwidth as a computer network. Then you have touch, which is about the speed of a USB key. And then
you have hearing and smell, which has the throughput of a hard disk. And then
you have poor, old taste, which is like barely the throughput of a pocket
calculator. And that little square in the corner, 0.7 percent, that’s the amount
we’re actually aware of. So a lot of your vision — the bulk of it is visual, and
it’s pouring in. It’s unconscious. And the eye is exquisitely sensitive to
patterns in variations in color, shape and pattern. It loves them, and it calls
them beautiful. It’s the language of the eye."
He uses a lovely visualization to illustrate that point about things the
eye calls beautiful! And then (I love this):
"And if you combine that
language of the eye with the language of the mind, which is about
words and numbers and concepts, you start speaking two languages
simultaneously, each enhancing the other. So, you have the eye, and
then you drop in the concepts. And that whole thing — it’s two
languages both working at the same time."
-- David McCandless, The beauty of data visualization, TED Jul 2010
I expect McCandless means, by language of the eye, not just the mind's
processing of the seen (what is out there) but the envisaged -- that is, the
language of the mind's eye. I like and use Feynman's story from his
childhood, when he realized one also thinks in pictures:
Image of Feynman as a boy from No ordinary genius: the illustrated Richard
Feynman. Quote is from Engineering and the mind's eye, by Eugene S. Ferguson.
We have this interplay going on in our minds between visual and verbal, and
when we add a visual dimension to a concept, we increase the power of our
cognitive and communicative tools. When we express a set of concepts visually,
we leverage the huge visual power that just works for us much of the time
without any conscious effort -- seeing relationships and patterns, anomalies, and the like.
"If you're navigating a dense
information jungle, coming across a visualization is a relief, it's
like coming across a clearing in the jungle." -- David McCandless, The beauty of data visualization, TED Jul 2010
On Friday I bought a set of history books that overlap with some we
already have, but did so because their raison d'être is to provide visual
maps of history, and I so loved that spatial contextualization of events. It
made things make sense that hadn't before. It drew me in, and provided markers to
anchor memory of events to. It is fun! The books also make other use of
visuals, with plenty of art and photographs, and the scenes of history are
much more alive in my mind with my visual sense activated (along with my
verbal).
Anyway, McCandless goes on to make the point that we can use visualization to
alter perspectives/change views. He looks at data set one way (military budget)
and demonstrates the impact of the comparison of US spending to other nations
when seen visually. He then adds information (military budget relative to GDP)
to show how what we make of it shifts as we add information, again made vivid
through visualizations that make relationships (like relative size) pop out.
That richer picture can lead us to change our views, and McCandless quotes
Hans Rosling: "Let the dataset change your mindset."
[Actually, it was Florence Nightingale who is credited with first recognizing the power of, and using, visual
rhetoric.] McCandless goes on to point out that if it can change your mindset,
maybe it can change your behavior. He spent a month creating a visualization
that only filled 2 pages of his book! But, he observes, the resulting
visualization is a form of knowledge compression, that squeezes an enormous
amount of information into small space.
2/24/11: Ok, so the thoughts we're conscious of happen in terms of language
and images. Much goes on below that conscious level.
"But the brain does much more than just recollect
It inter-compares, it synthesizes, it analyzes
It generates abstractions
The brain has its own language
For testing the structure and consistency of the world"
Sagan 'A Glorious Dawn' ft Stephen Hawking (Cosmos Remixed)♫ by John Boswell
And yet we wouldn't know our own thoughts but for them entering
consciousness, and the miracle of getting thoughts out of our heads, making them
accessible to others to enable interaction and extension, happens through language
(and symbol) and images. Yep, that visual drum...
2/14/11 Happy Valentine's Day!
2/14/11 Defy and So Lead Reason
In an earlier post I wrote (of Miyazaki and his anime movies): One who blends literature and
visualization in an exquisite art form that lets the imagination defy and so
lead reason.
I realized that that is what we do during early system conception. What's
that? Let imagination defy and so lead reason. It's that "what if?" and "why not?" phase.
There are many wonderful snippets of history -- including Kennedy's famous "I
dream of things that never were, and ask why not?"
and leading moments in Silicon Valley/Hewlett and Packard history -- in Jim
March's leadership movie.
In the architect competency framework, the original sketch of which was
largely Dana's work, I added that "(stick)" and I have had push-back/questioning
on the intention there. I meant that it isn't good enough to just make
(architecturally significant) decisions, but we must do what it takes to ensure
the decision is enacted, giving stability and "ground under our feet" to move forward. I didn't
and don't mean to be inflexible and immutable, but if everything is constantly
shifting there is no basis for integrity.
Dana and I were talking about this (following up on some points a chief
architect made in discussions last week):
Decisions are a way of creating stability (hence predictability) in a context
(/in the presence) of uncertainty. Key architectural decisions enable us to move
forward. So we must make them, and you could say that the "last
responsible moment" rule is misleading (as a blanket rule) for architectural
decisions -- in the case of many architecturally significant decisions, we have
to make them at the earliest responsible moment! Well, there is a paradox
here -- the earliness/uncertainty and the need for stability. So we make
key decisions (put
stakes in the ground), and by making them and exposing them to
system reasoning and review, we shake them about and stress-test them early, so
we gain confidence in them or change them -- early, when the cost of doing so is
less. That is, we make these decisions tentatively, deliberately setting out to
learn more -- as cheaply and quickly as possible, with pretendotypes or sketch
mockups and models and code prototypes and whatever fits the magnitude and
degree and consequence of the decision risk. We do this with a great degree of
imagination and creativity so we can move forward quickly, knowing what and
where our stakes are, and being poised to move them as we learn more.
We make the future (at least within the scope of what we touch) more predictable, by making decisions that form it. That
takes boldness, and requires experience. We apply our knowledge, experience,
intuitions from the past, and all the data we can afford to muster to illuminate
the present, including the forces and trends that signify what will be important
in the future, and we make decisions. We test those that need to be
explored and validated. And we socialize the decisions, so they are
followed, enacted, applied.
Socialized, because these decisions mean that we have excluded a whole lot of
other possible options in the broader decision/solution space, and we have to
convince others of the choice. There are lots of perspectives, and architecture
by nature is working to achieve strategic system-level outcomes that would not
be achieved if local concerns of some party with a vested interest (a
"stakeholder") were to dominate the solution and so skew the choice/outcome in
their favor. And socialized because moving into the future often means we're
doing something different than what was the status quo for us. Decisions make a difference to what we
would do anyway; if a decision doesn't, we have to seriously ask ourselves why we'd
make it a part of the architecture decision set, adding to the volume that
others need to grok and follow and govern. So by nature, architecture decisions
tend to be exactly those that need us to be motivating, persuading, influencing,
educating, coaching, and even governing. So that's the stick part. Stick,
as in sticky. As in followed. Adhered to.
But there's also the baseball bat. Well. Feather.
2/15/11: As different perspectives go: Here's Alistair Cockburn reading "Every
story has two sides" -- a witty and perceptive reminder of differences in
perception and perspective.
The "what if?" airplane design image below humorously illustrates the point of how a design would be
sub-optimized if the interests/perspective of one group holds sway over the
interests of the system as a whole. The image is on the Quintech blog,
though the source according to them is the TU Delft. (I got wind of it via
Daniel Store, whose manager introduced it to him.)
Image source: TU Delft
2/15/11 How About Watson?!
"No one was cheering for me,"
he says. "It was at their (IBM's) home arena. It was an away game."
-- Ken Jennings, quoted in 'Jeopardy!' champs compete against computer, USA Today
(Dana and Ryan, as a combined team, might possibly have beaten Watson.
They're good! But isn't that a more fair test? Combined results across larger
teams? I'd love to see Watson's architecture including the problem solving...)
2/15/11 Our Hearts Cry
Our community is reeling with the news of Jigme Norbu's death in a
traffic accident last evening. He is the nephew of the Dalai Lama and was doing
a 300-mile walk in Florida to raise awareness for the movement for "meaningful
autonomy" for Tibet. Our hearts cry so for his family.
Valentine's Day will always be a sad day for the Norbu family. And a reminder
to all of us that Jigme Norbu lost his life on Valentine's Day in a peaceful
walk to raise awareness for Tibet's plea.
Image source: Zen Wires
2/23/11: Jigme's brother and two of his sons will complete the walk.
In software architecture, we have two broad sets of forces -- designing to meet the
purpose of the system (a functional slant) in its use context(s), and designing
to meet structural needs (often framed up as qualities like scalability; they
are the qualities that the business cares about, and users too but generally
only indirectly, in so far as they bleed through and impact user experience).
The pressures of delivering functionality for and in the moment skew the
attention so that we run into structural issues (scalability, evolvability, even
predictability of function/proneness to errors). In such a world, very talented
experienced people who can "save the day" sorting out how to eke the next
stretch of life out of the system are sought after and rewarded. Often with the
title of architect. (Ghostbusters anyone?) Ignoring the interaction between
design of form to serve function and design focused on structural form to meet
points of structural strain or tension or demand gets us into this situation.
In other words, ignoring the need to shape the form that serves function with
structural demands in mind, runs us into a wall of "technical debt" that grows
ever more menacing.
The "what if?" airplane cartoon I "quoted" yesterday (above),
makes the point that if we focus on one area of functional or structural concern
in isolation, we get a system that is ridiculous and dysfunctionally
out-of-balance. I have often recommended to architects that they take one
(major/area of) concern, like security, and focus their design on accomplishing
the critical (defining) functionality and just that concern and the related
properties or system qualities (teasing out the concern). And then take another,
and focus on that in isolation. Well, of course this is a ridiculous go-nowhere
activity, especially when we're already fighting off the "BDUF curse." So why? The motivation is to find the ideas that surface
when we do this, so that we have more grist for our "and" integrative (making
connections, innovative) thinking. It frees us, and this freedom lets us find
ideas we aren't open to when we focus down in on finding a solution that
balances across a set of concerns. In other words, it is just a
brainstorming/ideating aid. And we need those, because human brains like to take
short-cuts and these preclude all kinds of interesting, good ideas. We can't do
too much, or we'd never get done, but it helps to do some of this in
quick-and-dirty, on-the-fly small team model-storming (as I've come to call it)
just to loosen up our frames of reference and get more inventive, and have more
ideas in circulation.
Decisions in VAP
I read JD Meier's discussion of Reference Architecture (under the section Reference Architecture Examples)
and it seemed like you could drop the word "Reference" and you'd be describing
the initial creation of the architecture for an application or system. I think the wording needs to shift (to accommodate
my small mind) to make it clear that when you have a Reference Architecture
you're starting not with a blank whiteboard (figuratively speaking), but with
already (more or less) populated architecture views, and working from that. He says:
"The beauty of the reference
architecture is that you can shape the application before you implement it."
You could equally say:
The beauty of the architecture
is that you can shape the application before you implement it.
To distinguish the two more clearly, one could say:
The beauty of the reference
architecture is that some of the shaping decisions have already been thought
through (and presumably proven in other implementations), and you can leverage
this to move more quickly and with greater confidence. When you start to
whiteboard your architecture, you have candidate models to start with, improve
upon, and specialize to meet the needs of your specific application. Etc.
Or something like that. The key being to differentiate the value of a
reference architecture from the value of an architecture.
A reference architecture is simply a prototypical architecture for a
space of applications (or products or systems). It is more than just a "straw
man," having some authority. It provides guidance, and may be used more prescriptively,
or more suggestively. For example, in a more prescriptive case, it would be the starting
point for application architectures in that domain and deviations from it would need to
be motivated; it may be specialized to the specific application, sketchy areas
refined and elaborated, decisions added, etc, but it should not be ignored or
altered without justification. Or a
Reference Architecture may be referred to simply for guidance, so the
architecture team has a reference point from which to draw ideas and they
informally, to themselves, justify deviations as a matter of diligence and
education. In that case, it offers guidance that will tend to lead to
somewhat more convergence and consistency, but doesn't prescribe or mandate this.
A reference architecture is created by a
broader scoped group like an industry architecture body (or a vendor) or the enterprise
architects or portfolio architects for an organization, with the intention that
it creates a "leg-up" to be used if not as a starting point for architectures
in that domain, then at least a point of reference. It is a way of sharing
architectural know-how, consolidating some of the lessons learned architecting
similar systems. And it is a way of bringing about more consistency in approach
and across systems, with a view to achieving other goals like ease of
integration or more mobility of developers across applications (lowering the
learning curve). Etc.
I've had architects, chartered by their manager to create a "reference
architecture," ask me what they should create. They didn't want to ask the manager
what he means (even though there is quite some spread in how the term is used), because then both he and they
would look a little at a loss. So I asked
what would be useful for their application architects to have done to assist
them as they design and evolve (most) applications under the rubric of the
reference architecture, and what would be useful for them as domain
architects (in the service of business strategies, goals and concerns) to work towards making common
elements, shared services, common patterns and approaches, etc. I suggested they do the "one-block pedestrian mall": to actually provide
high-leverage value in a small but meaningful chunk
of architectural work that they'd do very quickly and put a flag on and call Reference Architecture
Draft 0.1. That's "right hand" tactically oriented work. Good, but use it to
also occasion figuring out what the more strategic contribution needs to be;
that's left-hand work. In other words, if no-one knows what the thing means but
think they (ought to) want it, use that to your advantage to probe and define it to be something useful to your stakeholders and to you
working at your architectural scope. It's like you've been handed a stretch of
rope and you can tie yourself in knots with it or use it to bootstrap something
useful. So, something useful? Well, that's context dependent, but chances are you'll decide to start with a
handful of principles and/or a conceptual model (something like the reference architecture used for the pet shop
application). Be guided by what your organization needs to make
consistent and shared (wrapping architectural/design know-how into guidance,
decisions, implementations) across your architectural scope
(the projects under the "umbrella" or decision arc/envelope of your architecture).
What would really make a substantive difference to users, to developers, to the business? We're
trading off enabling something with constraining that and other things. So we
seek to be minimal and to show value.
A platform architecture (in this case, the architecture of a product family
or line, not an OS architecture) may perhaps be viewed as a special case
of a reference architecture (one where the scope of application and
specialization is specific to products in the family/product line), but it is
generally prescriptive, and its decision set may reach farther (for example,
including blueprints/specifications for shared components). (The minimalist
principle applies here too.) It will often come "bundled" with platform
artifacts that embed the architecture (for example, in a "framework," including
designated infrastructure, implemented shared services, etc.).
2/16/11 Organizing Models
I was looking for a reference in past journal posts to Dana
saying "decisions create ground under our feet" and
on that hunt, came across this: a model that elevates data to information to wisdom/insight/inspiration and action.
It occurs to me that is important -- the path from data to knowledge and
wise/informed action involves making useful connections, and organizing models
do a first level of pulling together and connecting data, presenting it in such
a way as to enable us to make a fresh set of higher level connections. From a
mighty wash of data, a veritable deluge, we see what to home in on and what to
occlude. Without such strategies we're lost in data. Anyway, the "organizing
model" could be seen as a design for information visualization.
Then what we do, for example, with our Competitive Landscape Map, following
the leadership of David Sibbet/The Grove, is use the organizing model not just
to organize and communicate the information but also in interactive group
elicitation, directing attention (a form of structured brainstorming). The
design of the information presentation also guides and directs attention in
drawing out/on the underlying information.
Yesterday I spent some time looking at Tom Gilb's Impact Estimation Tables --
Tom tweeted out from a seminar Dana gave in Norway saying we're missing Impact
Estimation Tables, so I followed up. It's an exciting approach/addition to the
toolkit, as it is one of those organizing models/thinking tools that supports
reasoning across decisions and views. You can get an idea in Kai Gilb's online
Evo book, or buy Tom's Competitive Engineering book. Tom's books are the kind of thing that I
think ought to be cornerstone books in any software library. I mean the man
is a pioneer of iterative and incremental. I should say,
Tom's enthusiasm is both his own greatest asset and his undoing, because he can
come across as wanting a world of i dotting and t crossing when everything is
moving too fast for anything more than the sensation of sufficiency in i's and
t's... We shouldn't confuse advocating with insisting. And we (as a community,
and personally) do need to recognize the value he contributed
with his 1988 book introducing Evo (Principles of Software Engineering
Management). It is huge! So I've been reading around in his Competitive Engineering book (though I did read it in previous drafts, then
known as the pLanguage book), and I find it definitely warrants more of my
attention.
"... received business value for their money (bang for bucks), and the
programmers no longer wrote code; they created business value!!
Their work had suddenly got meaning, and they loved it."
-- Jens Egil
Evensen, Tom Gilb -
an example from Jens, May 11, 2008
Ryan pointed out a bumper sticker.
Dana saw another on the same car that said "honk if you
understand punctuated equilibrium." Sounds disruptive. ;-)
A drive-by lesson in innovation!
On the subject of innovation, this video (via a Tom Gilb tweet) is amazing: ☼Bloomberg
Game Changers: Steve Jobs
There was a quip in a video I watched recently (can't remember which it was),
that went along the lines of "anyone can innovate! What? I have a garage!" Or
something like that. Spunk. And an empty garage. Or a dorm room.
As for the democratization of innovation, this is interesting:
"What the team discovered, described in a paper that is under review for publication, was
that the amount of money individual consumers spent making and improving
products was more than twice as large as the amount spent by all British firms
combined on product research and development over a three-year period."
-- Innovation Far Removed From the Lab, Patricia Cohen, New York Times,
February 9, 2011
Like this, folk -- like this! (Hey, it's great comedy, if nothing else. But just
think where we can go with that!)
Isn't the Internet great? Like giving us access to this: Guy Steele's classic OOPSLA'98 presentation, Growing a Language, also on video!
(Thanks to Brian Foote's tweet, and Google videos!)
6/26/11: Interesting contrasts in the wikipedia page on
punctuated equilibrium and an interview with Jerry Coyne on Stephen Jay Gould (author of The Mismeasure of Man). Quotes from the interview:
"To people that knew him
he was a somewhat arrogant and blustering individual. But we all
know that jerks can produce magnificent work."
"[I've] always felt that we evolutionary biologists are the most fortunate of all
scientists, because the whole purview of life is our study. On any given day,
I’ll be reading papers on molecular biology, on biogeography, on physiology, on
embryology, on the fossil record. It all rolls into the process of evolution.
There is always something exciting that comes up and I think it’s Gould and
Dawkins who best convey that excitement to the general reader."
The latter quote sounds like our field and its breadth of scope (systems of
all kinds to learn from, software applications in every field of human endeavor,
and all the fields that are brought in due to the human side of developing
socio-technical systems), and Grady Booch in conveying excitement, deep and
broad command, and excellence in rhetoric!
2/17/11 How to Settle Technical Debt -- Manager's Guide from Cutter
Cutter Consortium has a new guide titled How to Settle Your Technical Debt: A Manager's Guide, covering:
Introduction: Technical Debt -- A New Paradigm for Software Development by
Chapter 1: Modernizing the DeLorean System -- Comparing Actual and Predicted
Results of a Technical Debt Reduction Project by John Heintz.
Chapter 2: The Economics of Technical Debt by Stephen Chin, Erik Huddleston,
Walter Bodwell, and Israel Gat.
Chapter 3: Technical Debt -- Challenging the Metaphor by David Rooney.
Chapter 4: Manage Project Portfolios More Effectively by Including Software Debt
in the Decision Process by Brent Barton and Chris Sterling.
Chapter 5: The Risks of Acceptance Test Debt by Ken Pugh.
Chapter 6: Transformation Patterns for Curing the Human Causes of Technical Debt
by Jonathon Michael Golden.
Chapter 7: Infrastructure Debt -- Revisiting the Foundation by Andrew Clay
Chapter 8: Revolution in Software -- Using Technical Debt Techniques to Govern
the Software Development Process by Israel Gat.
Chapter 9: Technical Debt Assessment -- A Case of Simultaneous Improvements at
Three Levels by Israel Gat.
Chapter 10: Avoiding System Bankruptcy -- How to Pay Off Your Technical Debt by
2/22/11: Hear ye, hear ye: Technical debt as
comedy routine. :-)
"... engineering texts are uniformly amongst the most boring writings in
all of human literary output, we won't have a text for this course.
(The sole exception I've found is the first edition (2000) of Bernd
Bruegge and Allen Dutoit's Object-Oriented Software Engineering.)"
-- Shriram Krishnamurthi, course overview, Software
System Design, Brown U., 2009 (and again in 2010)
A nudge. And yet... When it is our field that placed books and other
forms of pipelined formal publishing (not just books but news and entertainment,
magazines, music, movies, ...) under threat as a distribution form,
shouldn't our field be open to recommending other formats as sources? So,
shouldn't my journal be the replacement to stuffy texts? ;-)
"perspective is worth at least 80 IQ points" -- Alan Kay, by way of Jeff
and, in particular,
perspective [gained from reading this journal, for example,] is worth at
least 80 IQ points!
Uh, yes. I said [the extended part of] that. Well, my word carries some weight...
[with the family dog...]
Oh, don't worry! This will stay a quiet backwater place! Remember, it bears the amulet of words. So ...very
...many ...words. No-one in his right mind would recommend so many words that...
just create perspective!
I know I contradict myself. It's very depressing that no-one recommends my
journal, but if they did, I'd likely have to put a lid on it again because I use
it to think. It is all very well to think "out loud" when the audience is
small and friendly -- the latter being easy to assume of people who find their
way here following their own interest and who read despite, or because of, the
word volume. Depressing? Well, because no-one recommends it, I feel like it must
be ...somewhat embarrassing, like no self-respecting software person would want
peers to know they sometimes drop by... So, paradox. If you can't handle that,
go look in the mirror and ask yourself if you really want to be an architect.
Paradox and ambiguity are cohorts of architecting, no matter how boldly we make
and assert decisions. Which reminds me... Dana says he thought of asking a group
of architects he was working with if they think their spouses are absolutely
rational all the time. And when they, incredulous at so daft a question, said
"of course not" he'd say "and you picked him/her!"
His point being that even the person we most love and admire -- and chose --
isn't consistently rational, so why do we expect our managers and peers to be?!
...Uh, what's that? Your spouse is entirely rational? Oh. That's embarrassing.
I read the paragraph above to Dana to make sure he was ok with it (it is a family
joke that everyone has to watch what they tell me), and he responded by
recounting the followership story from The Tales of the Dervishes.
2/19/11: This courtesy of Ryan, from a Garfield comic (or so he tells me): "What's the difference between
a rock and a dog? About 3 IQ points!"
2/18/11 Leaders Tell Stories
So this looks like a book to consider: Tell to Win: Connect, Persuade, and Triumph with the Hidden Power of Story by Peter Guber. It is reviewed here. See also Art of
Purposeful Storytelling interview with Guber, author of Tell to Win.
This is an interesting read: The Adaptive Function of Literature and the Other Arts, by Joseph (Joe)
Whether we're designing the system, or trying to grok a built system, (for
any but a trivial system) we can't hold it entirely in the conscious mind at
once. Dana put it this way: in any moment, the full system is incomprehensible.
To reason, to bring intentional thinking, experience and judgment to bear, we
needs must work on different facets, and interrelate the facets. We do the "threads
of reasoning" thinking across views and use views that "pivot" perspective
so that we can interrelate aspects of other views. Anyway, I was thrilled to see
Christoph Niemann's cartoon "various
14" because it so well speaks to interlocking views. ;-) Niemann is a genius and in my flat land where everyone is a hero, he is a giant of one! I mean, we're talking How to Please Elise (seq. 22) -- that's some visualization!
2/19/11 The "Elephants in the Agile Room"
Philippe Kruchten posted highlights from the 10 year agile celebration event put together by Alistair
Cockburn. The "elephants" reveal some refreshingly candid and probing work.
2/19/11 Twitter Architect's Secret Sauce
That got your attention. Evan Weaver's annotated list of distributed systems
papers is brilliant: Distributed Systems Primer and Distributed Systems Primer Update. The slides from his presentation at
QCon in 2009 are here: Improving Running Components at Twitter (useful pointers in the comments).
2/19/11 Good Following is Acting Empowered
Dana recounted Idries Shah's "The Nature of Discipleship" from The Tales of the Dervishes (pg 146 of the book, but 148 / 225 gets you
to page 146).
The title of the story is misleading in this context. Essentially it is about good
followership being discerning what needs to be done and doing it without being
told. It is an awesome story/lesson for architects, because we are expected to
do things without being told -- and without being taught what things we should be
doing without being told. It has to do with sensing what needs to be done to
bring about the vision/mission. And
sensing what needs to be done has much to do with the non-technical, though doing what
needs to be done centers in good part on the technical stuff we're good at. That
sensing, though, comes of, in Dana's words, shifting our center of gravity from the
technical to the business, so we're thinking of technology in business value
terms (that sustainability mantra), rather than purely in technical problem
solving terms. And that leading up and out, to ensure that the technical stuff
gets the wherewithal and the stick-to-it that it needs, that takes time. And
odds are, no-one is going to tell you to go lead up and out, because they're
going to think of that as turf encroachment. It's a tricky business, this sensing
what needs to be done, once you come out from under the political umbrella your
architect was holding over you, and become an architect. Hm, I have to draw
that, don't I? I did the image of the architect raising the productivity ceiling of the team, but the architect holding an
umbrella over the team, protecting them from political... um... fallout would be
good too, don't you think?
If you're concerned about the title to this post, please remember that
following presumes a leader, not a controlling manager. (On a somewhat related
note, remember this Dilbert on taking...)
On Saturday evening, Daniel (who grew up in Romania) pointed to some Constantin Brâncuşi links, prompted by the Google logo.
Also on Saturday evening, Dana had me read "Fatima the Spinner and the
Tent" from Idries Shah's Tales of the Dervishes. That parallels Steve
Jobs' commencement address
at Stanford. Essentially, the stories encourage us to "follow our Bliss,"
our passion and values, taking comfort and courage in knowing that later we will
look back and see how the dots connect.
On Sunday I got around to reading more about Brancusi on various art museum
sites and, of course, wikipedia. It was interesting, again, to see the connected dots:
"Though just an
anatomical study, it foreshadowed the sculptor's later efforts to
reveal essence rather than merely copy outward appearance." -- wikipedia
"There are idiots who define my work as abstract; yet what they call abstract is what
is most realistic. What is real is not the appearance, but the idea, the essence
of things." -- Constantin Brâncuşi,
I thought that was interesting. Software architecture is like that, isn't it?
About finding and expressing the essential structure and nature of the thing. So
we struggle with how to talk about it -- abstraction, compression, ...? To me,
in advance (of the built system) it is more abstract (as in the mechanism of
neo-modern art) though by reference or allusion
(metaphor/analogy/symbol/imagery, ..., patterns) we draw in potentially huge
influence and experience so compression plays a very important role. Once built (at all, and as
the system evolves), compression (the very real, actualized meaning of whole
chunks of the system gets compressed into the elements and mechanisms we
represent) and abstraction (selectively eliding and occluding
detail to reveal the essential) figure. In any event (expressing design intent
or reflecting design as built), abstractions are central. I suppose, if I must,
I could dance on the head of a pin and say these abstractions (entities in
representational form) are compressions (they draw into compact form much meaning).
The neat thing is that whole fields of representation (including aesthetics
and semiotics) swoop into relevance. :-)
As for connected dots, we use that concept in architecture too. By
articulating our decision rationale, we provide the means to connect the dots
from intended to actualized system outcomes. We draw from the experience of our
own and our collective past (for example knowledge formalized in patterns, but
also which we access informally through conversations with peers), project a
future, and connect the dots from the future we desire to create to the past
that gives us connections and experience to boldly build on. Of course we
encounter surprises, we learn, we adjust our course and our conceived future.
And if we reconnect the dots, we create a story from which the next generation
of system evolution will learn and build on.
"There are only two or three human stories,
and they go on repeating themselves as fiercely as if they had never happened
before." -- Willa Cather
"Books serve to show a man that those original thoughts of his aren't very new at
all." -- Abraham Lincoln
"Everything has been said before, but since
no-one listens we have to keep going back and beginning all over again" -- Andre Gide
2/24/11: This is interesting: Unlocking the Mysteries of The Artistic Mind, Psychology Today, July 01,
2/21/11 Status of Our Profession
We have, on the one hand, the Software Craftsmanship Manifesto, and on the
other, Capers Jones's perspective on large projects. A lot of good is happening in
our field -- if we just look around us we see all kinds of evidence of software
that works. Pretty well! Superbly in most cases! But we also have concerns. I
think that is healthy. A self-satisfied field would be stagnant, and ours is not.
Want to make your architecture memorable? Try something (for example as the
lead in to your presentations) along the lines of what is described in this review of Christoph
Niemann's book (I need to look at the book, but the description conveys the
idea). You don't think I'm serious, do you? Let me at your architecture, and
I'll show you how!
Innovation games have nothing on this stuff! And they're good; I'm not
knocking them. I'm just saying there's a whole lot of scope for play that gets
real work done.
2/22/11 Software Architecture Workshop
I'm getting really excited about the new material going into our Software
Architecture Workshop. I decided to keep it to 4 days at least for now, so the
pace will have to pick up, but that's good too. The next open workshops are as
This is such an exciting time for architecting because so much is coming
together to boost our system design prowess -- and what a good thing that is,
because system complexity isn't going to tame meekly. No, we need audacious bold
ruffyans to lead us on this, now don't we? ;-)
Well... you could tell someone about the workshop. Oh, I know, who'd want to
take a workshop with me... Sigh. Well, Dana's teaching the one in The
Netherlands. He was in Europe this month and a group of architects he was
working with got so excited about the discussion they were having around their
system design they ran out of space on the notepad and one of the architects
kept going, drawing on the table to make his point visual -- to show what
he meant! Without noticing, he just kept going right off the page!! (I
warn folk not to draw off the paper onto the walls. That happened once, and I
was liable for the inked wall... but this was pencil, on a table...)
2/22/11 IBM Got Smart
IBM Rational's marketing has wised up to the "hand-rendered"/sketch animation
wave... a little ironic, but also really cool. See for example this one on system complexity (neat auto example -- 10M LOC!). No, I'm
teasing/kidding about it being ironic. Hand sketching and visual design tools
live well together. It's just that we people need to remember it's not a
one-or-the-other world. Quick and dirty sketching has its place. And tools have a place. (We need to evolve our designs, keep them current so that we
can continue to be intentional about design changes, new features, rolling
learning into the design and so forth, and tools are key to reducing the
encumbrance of doing this.) And slick sketch animations have a place. Hey, I
watched the ads (complexity,
predictability, leapfrogging); they're that cool!!!
I might just get addicted to Twitter, despite my misgivings... For example,
David Curran (@iamreddave) tweeted: Gödel, Escher, Bach the [MIT] video lectures. Right there that's sufficient
ROI on a week's worth of daily minutes glancing through the tweet stream!
Watching the videos will cost me more time, but the overview looks interesting.
I can see some great workouts in my future. ;-) (Last week we got out of doors
as the weather was just lovely with temps in the mid-60's, but it's winter.)
Brian Foote (@bigballofmud) tweeted "Endosymbiosis or mere Predation? RT @robwalling: What happens after Yahoo acquires you (by 37Signals)." I love Brian's
metaphoric allusions! Compressive genius. Anyway, it's an interesting read --
including (many of) the comments.
Other tweets alerted me to
Indeed, I have to say, this procrastination pays huge dividends in... much, much more
procrastination! Uh, I mean, educating myself. Yeah, that's it.
But none of the tweets said anything about flying cars. Hello? It's 2011.
Tale of Two Lenders, Bloomberg Business Week, Winter 2006
- Kiva, Porter Anderson, indexaward
- Kiva CEO Talks Trust, Women Entrepreneurs, and Oprah, savvysugar,
- Pop!Tech: Jessica Flannery explains Kiva, Ethan Zuckerman, 10/18/07
- various NextBillion
entries on kiva, including tidbits like Kiva moving into student loans
- Pop!Tech -
Interview With Kiva's Jessica Flannery, Robert Katz, 10/20/07
- in 40 under 40, Fortune
ProFounder (co-founder: Jessica Jackley)
"Pursue your passion.
Peel away the boundaries between you and the people you want to
work with. If you do that peeling, you can build connections
that change you and change the world. In the course of pursuing
passion and peeling away boundaries, you become vulnerable.
Don’t fight it. Strive for vulnerability – beautiful things can
happen out of it. In that same light, here’s my one-liner:
never, ever think you are better than anyone else. If you can
live like that, and work in the BOP context, then you can really
change things." -- Jessica Jackley Flannery, Pop!Tech - Interview With Kiva's Jessica Flannery,
Robert Katz, 10/20/07
Another one liner I liked was Premal Shah's "The relationship
trumps the issue." People are more
important than notions one is pushing.
The problem with being a woman ... shouldn't be one! The world is changing fast.
Notions of hierarchical organizations that arose to serve the industrial age are
crumbling. Not so fast, but hopefully to follow, are notions about power and
leadership. "Women's lib" liberated men to play a greater role in their family
lives and broke the straitjacket of convention when it comes to showing
emotions. Yes, it's a juggle balancing dual careers and aspirations, especially
when advancement opportunities open up elsewhere in the country or the world.
Yes, it's tough managing the load of kids and work demands. Yet with the demand
that both partners play a role came much greater investment by men in the
nurturing and playful side of family life. Now we need to liberate
our view of "authority" and allow that a person can lead without exhibiting
stereotypical "power" behaviors (like arrogance which is a way of "occupying
maximum space"). A person can lead by being
vulnerable, and inviting those closer connections that come of
vulnerability. And leading through conduits of connectedness. Dominance distances. Yes, it allows for "objectivity" when hard decisions have to be made
-- like killing people or projects. Let's have more vulnerability!! Let's
experiment our way to more right things, rather than doing wrong things more
pseudo-confidently and aggressively. Being determined, seeing clearly, getting
things done -- yes. None of that essentially requires power posturing. Vanilla is
not the only flavor in leadership, and network facilitative styles have an
important place. Our expectations around behaviors that signal authority need
to open up, so that we accept that being vulnerable and humble aren't incompatible
with leading and being an authoritative source of visionary where-to and how to
get there in a collaborative, community-enriching way.
Oh well, I have my uses.
“In the end, we self-perceiving,
self-inventing, locked-in mirages are little miracles of self-reference.”
— Douglas R. Hofstadter, I Am a Strange Loop
"Books serve to show a man that those original thoughts of his aren't very new at
all." -- Abraham Lincoln
I think that's the 3rd time I've quoted that. It really ought to be a lot,
For different cultural takes on The Sexual Paradox, look at the
difference in the covers of the
Martin Fowler's TradableQualityHypothesis post reminded me to reread my refactoring essaylet on the Bredemeyer site, which is a trim-down of my Refactoring and Lean post from March 2008. I know I stubbornly persist in
using "refactoring" assuming a natural interpretation (revising the
factoring) which is more general than Martin's definition in his book and on
the refactoring site which says:
"Refactoring is a disciplined
technique for restructuring an existing body of code, altering its
internal structure without changing its external behavior. Its heart
is a series of small behavior preserving transformations."
I do so because refactoring is important in
architecture too. I think we're smart enough to have context-sensitive
definitions, don't you?
Anyway, now that we've offset any rant about how I use the word refactoring...
Ok, let me say, I think the spirit of refactoring is that it doesn't take
away or add functionality, and that is the sense in which it doesn't change
behavior. We then open up the concept of "refactoring" to larger scale
restructuring, for example to allow more scalability, or to take advantage of
parallelization, so improving performance. Or to reduce coupling and create a
more balanced distribution of responsibilities (no "god" components), etc., so
that the system can be understood and modified with lower impedance. Moreover,
poor designs tend to increase proneness to errors (for example, through
decreased understandability). Caught, we call them bugs and we try to fix them.
Not caught, they leak through our defenses (and sometimes we knowingly let some
untold number out, just deciding to eat it because "best effort" in complex systems must
suffice) and become a cost of quality time bomb.
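To make the behavior-preserving sense of refactoring concrete, here's a minimal sketch (the invoice functions, prices, and rates are all invented for illustration): the "before" version tangles subtotal, discount, and tax into one routine; the "after" version factors the same logic into intention-revealing pieces, with observable behavior unchanged.

```python
def invoice_total_before(items):
    # Original: one tangled routine mixing subtotal, discount, and tax logic.
    total = 0.0
    for price, qty in items:
        total += price * qty
    if total > 100:
        total *= 0.9  # bulk discount
    return round(total * 1.08, 2)  # add 8% tax


# Refactored: the same behavior, factored into intention-revealing steps.
def subtotal(items):
    return sum(price * qty for price, qty in items)


def apply_discount(amount):
    return amount * 0.9 if amount > 100 else amount


def invoice_total_after(items):
    return round(apply_discount(subtotal(items)) * 1.08, 2)


# Behavior preserved: both versions agree for the same input.
items = [(30.0, 2), (25.0, 3)]
assert invoice_total_before(items) == invoice_total_after(items)
```

The "larger scale" refactorings discussed above (decoupling, rebalancing responsibilities) are the same move writ large: the external behavior contract holds while the internal factoring changes, which is exactly why they deserve the same name even beyond Fowler's code-level definition.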
Cost of quality is an opaque term. It really refers to the cost of unquality
-- the intangible costs in terms of customer perception and damage to brand
reputation, and so on, and the tangible costs of rework and patches, settling
claims, lost revenue due to service/production outages, etc. Disappointment and
frustration cost customers directly and indirectly, which impacts customer
loyalty and word-of-finger (I can't bring myself to say word-of-mouth in this
digital age) advocacy, and translates into lost revenue. There's generally a lot
of (grudging) tolerance for software's glitchiness because we want the
enablement it gives. But best effort isn't good enough when a competitor far
surpasses others and tips the scales of expectation and desire. Designs
that otherwise delight are undone by an experience that is just encumbered by
shoddiness under-the-covers that bleeds through.
Image: David Heinemeier Hansson (DHH) Tweets, 2/25/11, demonstrating that
quality issues bleed through to impact user experience... (Apologies to the Xoom
team for using these to illustrate when I have no first hand experience with
Xoom and there are so many examples one could use. This was just near to hand.
The point is well illustrated though. An eager user of a complex/exciting
product will likely forgive the occasional glitch (so long as it only costs a
bit of frustration, not safety or big money), but there is a quality threshold
they expect and will hold a product to. Here's an alternative view on Xoom.)
2/25/11: Oh, wow, Brian Foote returned to blogging (about time!)... Great post on "refactoring's
original sin." Under agile, refactoring became a phase in the process.
Reading Brian's post, it occurred to me that we shouldn't sweep the
refactoring that we do prompted by new functionality under the rug either. New
functionality can make it more clear that we need to restructure -- in
particular to factor responsibilities differently, to create more crisp
abstractions that separate concerns along somewhat different lines. It's a nice
idea to separate changing the structure (and associated testing to make sure we
keep extant functionality working) from adding functionality, but pragmatically new
functionality may be what brings the opportunity/need to light. So we have
refactoring as a "stage" in the "process" and we have refactoring as something
we just do... though if the impact is more diffuse, we need to do it under the aegis of
the architect. This is where we return to recognition that the architect needs
to be conscious that he or she can get much good done by working at the level of
team culture, shaping and reinforcing values and principles. And as a leader who
spends non-trivial cycles leading up and out, educating and getting sponsorship
and advocacy from the management team to support a quality orientation that
isn't overly pedantic and rigid, but also has no truck with sloppiness that will
make project momentum harder and harder to sustain.
Image: Tweets from Bill Wake on 2/28/11
3/6/11: See also: The Impact of Accidental Complexity on Estimates, Jay Fields, February 23,
2011. (In The Art of Change: Fractal and Emergent,
I used "gratuitous complexity" for the complexity that is not inherent in the
problem, but added by technical debt.)
2/25/11 The Extended Human
Ryan delightedly recounted the scene from some after-school social
time in his classroom -- 4 kids were seated around a table all on their
computers and/or phones, texting and Facebook chatting with each other while they
were also talking to one another! That takes the multiple channels notion to a
whole new level! These kids are monsters of compute-enhanced living!
2/25/11 Common Sense versus Conventional Wisdom
A place holder. ;-) [I wanted to write a piece about situations where
(that uncommon) common sense runs counter to conventional wisdom.]
2/25/11 Point. Counterpoint.
Point: From Trash Cans to Nokia: Is Creativity Innovation?, by Steve Denning. In the
example of Apple [AAPL], they quote the Apple design team, who say: “It’s all
bullshit and hot air created to sell consulting projects and to give insecure
managers a false sense of security. At Apple, we don’t waste our time asking
users, we build our brand through creating great products we believe people will love.”
-- quoted in User-Led Innovation Can’t Create Breakthroughs, Steve Denning, Feb. 15 2011
Counterpoint: Tapping the Innovative Masses The creative power of technology users awaits
mining by companies, says Eric von Hippel, By David Talbot, Technology
Review, February 25, 2011
"Basically, nobody ever expected that
consumers innovate. It's not in economic theory. It's not in policymaking. The
traditional model that has been in place since 1934 [the economist Joseph A.
Schumpeter published The Theory of Economic Development that year] is that
producers are the innovators. Schumpeter even argued that producers, by what
they offer, create user needs. Because there was an assumption that producers
were innovators, nobody looked at individual consumers to see if they innovated.
Now that we've taken a look, we find out it's twice as large as producer
innovation in consumer categories."
-- Tapping the
Innovative Masses, David Talbot, Technology Review, February 25, 2011
I think there are some interesting points about user-led innovation. The
first is that innovation is democratized in the sense that individuals are using
their wherewithal to create incremental, and in some cases quite revolutionary,
in cost or concept terms, advances on extant products and even invent new ones.
Ok, the beneficiaries of each innovation may be limited to the user or a small
circle, and we could quibble and say that it is only an innovative creation, not
an innovation, unless significant economic value accrues to customers or the
producer -- that is if the creation is leveraged into economic value at scale.
The point is that on aggregate the phenomenon becomes economically very
interesting -- individuals are winning the battle for a piece of their own
wallet with their homebrew innovations and are doing so in large numbers. Do we
need to rethink the economists' definition of "innovation" when vast numbers of
new "products" with incremental or revolutionary new features, applications or
cost structures are being created by users? Of course, what we would include for
consideration here is a slippery slope... but if we'd consider it an innovation
coming out of a company, shouldn't we consider it an innovation if created by an individual?
The second is that these innovative creations represent a source of
innovative designs that producers may capitalize on and leverage. They are already
prototyped! Yes, sure there are more ideas than resources to fund them all, and
tough "bets" have to be placed. And some ideas will be before their time. Others
may have a viable market of one, and that's now filled. That said, when a user can
hack a solution for a fraction of the cost of one on the market, for example,
there's some merit to taking note. Yes, industry incumbents have lots of
ideas too. The point is that the big idea that will delight users may come from
an innovative user who is working in advance of research labs. More grass-roots
funding options are emerging. Great. Industry incumbents should be looking at
these too, not just "crowd-sourced" angel investors...
One thing that the "Apple design team" was missing, is that they are
designing consumer products -- that is, products they themselves use and
understand. So they are asking (would-be) users. Themselves! That is very
different than creating a system for oil exploration from an office building in
Europe or the US. The experts who understand the (geo)physics and such are in
the building. The users may be in a ship. So there are diverse sources of
expertise that need to be brought to bear. The key is that the architect and
others on the design team are interpreting and imagineering the concept and
design based on many sources. Users being one. Their own ingenuity being another.
As point-counterpoint conversations go, I was thinking, reading around the
Nokia saga yesterday, that it might be used to argue against product platforms
and product proliferation, when indeed the company has led the market on revenue
and volume -- just not in smartphones. I gather the critical issue was not the
platforms and variation, but bureaucracy and local (divisional) vested interests
that led to not simultaneously moving aggressively enough to
cannibalize its own product base with an exciting smartphone and extended
ecosystem creating/supporting relationship platform. Sure, these are related
issues. But the recent travails only point to the need to ... bugles please ...
read my The Art of Change: Fractal and Emergent paper. Just kidding. Though it is relevant and great and all that. Maybe. Well,
you decide if it adds anything to Clayton Christensen's Innovator's Dilemma.
I think it does. Maybe. Oh, you decide. ;-)
Co-creation Primer, Stefan Stern, HBR: The Conversation, February 28,
2/25/11 Note to self
I watched half of this before going to the peace play at Ryan's school (for
which Ryan created and ran the sound effects): Brownfield
Software - Industrial Waste or Business Fertilizer?, presented by Josh Graham, Feb 25, 2011. I'll try to get back to it tomorrow...
2/25/11 Design Beyond
(After the play) I read Richard Gabriel's "Design
Beyond Human Abilities" essay from 2007. As he hoped, it is a
very provocative essay, and I do very much like that in an essay. It doesn't
leave one thinking the world is all solved and done, but rather opens up
questions, posits thought-provoking ideas and brings fresh concepts and
analogies into play.
I'll be using the "canalization" word as it well describes what I have been
using many more words to attempt to articulate! When I explain why the waterfall
trap is "being wrong righter" and agile can be prone to "being wrong faster," I
can now just say: these approaches canalize too early. See how quick that was?
Well, you need to know what "canalize" means; if not, read the second-to-last
paragraph on pg. 31. Yeah, we're still going to canalize. It's actually
important if we're going to bring experience, analogy, and experiment into focus
to produce something in competitive timeframes. So we do some concept
design upfront, to explore ...um... diverse (figurative) "canal choices" just
enough to know which we're hot to invest our passions in, not to mention our
business's resources and potentially one shot at success in. Oh, now you want to
duke it out, putting on the "iterative and incremental" boxing gloves in defense
of agile? Well, this is the point of canalizing -- once we head in a direction,
it becomes self-reinforcing. Expectations get shaped around what we have done so
far. Feedback quickly starts to focus on what we did put out, and what to add to
it. Systems shape the world around them. So we get canalled into small delta
adjustments on the main flow.
Oops. Well, it's a nice idea that having just that perfect word would save words.
As much as the essay excites me to dust off some ideas that want their moment to
shine, it's late and tomorrow's set to be yet another busy, over-committed day.
The Mercedes Benz left brain right brain ad series (code
- painting; numbers - music; chess/structural design - free spirit/passion/fun) sure is stunning! You
don't need to be reminded about my Making It Visual presentation (it's pretty naked, in the Garr Reynolds sense, of course), but what I intended to say is here. I noted:
The right brain/left brain
allusions are not intended to be taken as unquestioned and
unquestionable fact but rather as a metaphor. This is a research
frontier that is interesting and the intended take-away is that we
need to allow that we use our detail-oriented, pragmatic, logical
"left brain" functions much of the time. And while that "left brain"
self is a bit suspicious of the "right brain" self (like it is
wearing a tie-dyed t-shirt that is an adopted vestige of the 60's),
there are times when we need to be creative, think big-picture and
holistically, and be comfortable with ambiguity and uncertainty.
Anyway, this picture helps remind us that our understanding of the
brain has come a long way, and has a long way to go. And that does
nothing to undermine the point--our education system and our reward
system helps to shape our experience, strengths and self concept,
and we have to take courage and invest some of our cycles in the set
of "functions" that have been associated (correctly, or not) with
the "right brain." Walt Disney recognized this, and purportedly
created separate physical spaces to play out the "dreamer," the
"realist" and the "critic"--separating in space and time the "right"
brain functions from the "left," to ensure that the right brain was
not dominated into too early submission to practicality and logic.
And, Disney, as Randy Pausch taught us, is a software engineer's
-- moi, PICTURE IT: The Art of Drawing People In, 6/2009
See, I used the "holistic" word. You knew you shouldn't be reading here. Now you're going to have some of that ucky right brain stuff stuck on you.
2/26/11 Persuasion and Other Arts...
Jim "Cope" Coplien's initial few "agile career" blog
posts have, by virtue of
being piped to IEEE (Software) members, been exposing a broad audience to the
"whole person" and "art in engineering" conversation. It's a tough conversation
to have, so hats off to him for going there. But. Wink. You knew that was
coming. But nothing. Ha! AND I think we need to be careful to include in the
conversation observations in the vein of a Feynman or draw on the likes of a da
Vinci who are as much engineers' heroes as artists'. Or to do something like
"Smalltalk development enticed
one to "go native", like Gauguin's Tahiti."
-- Brian Foote, Refactoring's Original Sin: Part I
-- complete with the image, to demonstrate his point, thereby illustrating my point
that art informs the views and enriches the illustrations of scientists.
That said, the core
disciplines of software engineering and computer science are exciting for the
challenge and the reward of solving puzzles that demand ingenuity and, for those
who hold themselves to a high standard, keen aesthetic sensibility. And so we
find, in the work of the talented, beauty in software -- in the code, though Gource makes even commits look
lovely. Beauty is, after all, something we individually perceive and where we
find it, our mind-spirit inclines to it in awed appreciation. It appeals to our
aesthetic sense but there is also a component of admiration for the mastery
whether it be a work of creation or Creation (use your preferred interpretation
for Creation, whether evolution, or Evolution with a Guiding Hand, or Creation).
Tonight we went to an IU Music senior's percussion recital (marimba mostly, but also a bembe drum
and vocal piece, and culminating with a Joplin ensemble arranged for
marimba and xylophone by Ingrid Cheung. Ingrid impressed us with her phenomenal
performance!) Anyway, as I was listening to the music and see-feeling what it conveyed to me (the Northern Lights piece, for example), the thought struck me
that what we create has different dimensions and it is in all that it brings together for us that we find its
meaning. When we write software, or music, or poetry, we aren't thinking "oh, we are creating art," we are just intent on expressing what propels us. Our
fine-tuned aesthetic sense is working in the background. The key, though, is
developing that sense. And we do it by mastering our medium. Yes. But also by
developing our sense of what is meaningful and to be valued. This has to do with
our sense of right and wrong, but also with our sense of what is good in and for
the community we care about. So we develop our values and our aesthetic
sensibility. And our values thread, or are implicit in, the
meaning we create in essential form in our work... (hey, between the recital and
now, I'm winging this... two later than the usual late nights in a row...
for all I know this is absolute nonsense but it tumbles out through my miscreant
fingers that should by rights be getting some sleep...) the sense that I
am trying to articulate is that yes, we can build something meaningful or
useful, but our expression can create meaning for our community. For example,
that we can demonstrably create code we recognize as beautiful, gives a "software
craftsman" a sense of greater aspirational purpose than writing just sufficient
utilitarian code. What we mean by beautiful has a lot to do with the context and the community.
So it was interesting to return from that recital, to stumble by that
Bliss-following Serendipity-blessed path on this:
"My starting point is always a
feeling of partisanship, a sense of injustice. When I sit down to
write a book, I do not say to myself, ‘I am going to produce a work
of art’. I write it because there is some lie that I want to expose,
some fact to which I want to draw attention, and my initial concern
is to get a hearing. But I could not do the work of writing a book,
or even a long magazine article, if it were not also an aesthetic
experience." -- George Orwell, Why I Write
Late...let me just say what triggered this exploration in the witching hours
-- an article in
The Economist that ... Well, I thought this was worth drawing out, at
least for a smile:
"Studying the arts can also
help companies learn how to manage bright people. Rob Goffee and
Gareth Jones of the London Business School point out that today’s
most productive companies are dominated by what they call “clevers”,
who are the devil to manage. They hate being told what to do by
managers, whom they regard as dullards. They refuse to submit to
performance reviews. In short, they are prima donnas. The arts world
has centuries of experience in managing such difficult people.
Publishers coax books out of tardy authors. Directors persuade
actresses to lock lips with actors they hate. Their tips might be
-- The art of management: Business
has much to learn from the arts, The Economist, Feb 17th 2011
The real value in the article (for me), was the pointer to George Orwell's Why I Write essay! It is one of those pieces of writing that jiggles one's mental models
into new configurations. I love that!
"All writers are vain, selfish,
and lazy, and at the very bottom of their motives there lies a
mystery. Writing a book is a horrible, exhausting struggle, like a
long bout of some painful illness. One would never undertake such a
thing if one were not driven on by some demon whom one can neither
resist nor understand. For all one knows that demon is simply the
same instinct that makes a baby squall for attention." -- George Orwell, Why I Write
"To raise new questions, new possibilities, to regard old problems from
a new angle requires a creative imagination and marks the real
advances in science." -- Albert Einstein
2/27/11... the month is drawing quickly to a close... with so much to do.
2/28/11 Refactoring continued
Cool, Brian followed up with Refactoring's Original Sin: Part II.
"Lehman and Belady had observed that while
design was an entropy decreasing process, maintenance was an entropy increasing
process, that inevitably eroded structure." -- Brian Foote, Refactoring's Original Sin: Part II
2/28/11 Where's the Design Thinking?
I read Bob Martin's "The
Land that Scrum Forgot" article from December 2010 (via Brian Foote's post
on Double-Entry Accounting). This is an excerpt:
"One of the reasons for
this hyper-productivity is the smallness of the code base. Small
code bases are easy to manage. Changes are easy to make; and new
features are easy to add.
But the code is growing fast; and when a code base gets
large, it can be very difficult to maintain. Programmers can be significantly
slowed down by bad code. Teams can be reduced to near immobility by the sheer
weight of a badly written system. If care is not taken soon, the hyper-productive
scrum team will succumb to the disease that kills so many software projects.
They will have made a mess.
The reason Scrum teams make messes is because
they have been empowered and incented to
make one. And a Scrum team can make a mess
really, really fast! A Scrum team is hyper-productive at making messes.
Before you know it the mess will be “so big
and so deep and so tall, you can not clean
it up. There is no way at all.”
when that happens, productivity declines.
Morale goes down. Customers and managers
get angry. Life is bad."
-- Robert C. Martin, "The
Land that Scrum Forgot", 12/14/10
The issues Uncle Bob highlights and the recommendations he makes in this essay are important. Lots of good advice and pointers to tools.
Yet... no mention of architecture and design as intentional and reflective
activity? Improving low-level design through refactoring and TDD is a big deal. But there are other design tools too. The system grows, and we think we can hold it all in our heads and manipulate and reason about an increasingly not just big but complex system (many interacting parts, explosion of state space, (ostensibly) vast degrees of freedom and malleability, etc.). Why? I mean, what are we gaining by encouraging (or not discouraging) an anti-design mindset? This isn't
(well, shouldn't be) a warring among factions. This has to do with software people we
care about getting burned on projects that get locked in hairballs.
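To make the contrast concrete, here's a toy sketch (all names and numbers hypothetical) of the kind of low-level design improvement refactoring delivers: the same shipping-cost behavior, first as a conditional tangle, then with the rate table made explicit data so the policy is stated once.

```python
def shipping_cost_before(region, weight_kg):
    # The "hairball" version: branching logic that grows with each new region.
    if region == "domestic":
        return 5.00 + 1.10 * weight_kg
    elif region == "continental":
        return 9.00 + 1.75 * weight_kg
    elif region == "overseas":
        return 15.00 + 2.50 * weight_kg
    else:
        raise ValueError(f"unknown region: {region}")

# Refactored: the rates are explicit data, and the policy
# (base fee plus per-kilogram rate) appears exactly once.
RATES = {
    "domestic": (5.00, 1.10),
    "continental": (9.00, 1.75),
    "overseas": (15.00, 2.50),
}

def shipping_cost(region, weight_kg):
    try:
        base, per_kg = RATES[region]
    except KeyError:
        raise ValueError(f"unknown region: {region}")
    return base + per_kg * weight_kg
```

The behavior is unchanged (tests pin that down), but the second version makes the design decision visible and makes adding a region a one-line data change rather than another branch.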
Why do we spurn intentional, explicit design? Patterns say that knowledge we have
garnered in our field applies to future systems. And yet we somehow don't want
to say that we can apply knowledge we have built in ourselves and our field to
the design of new systems or the improvement of extant ones? In the 90's there
were projects that got mired in analysis paralysis from zealously overdoing
modeling. We labeled the disease BDUF, giving antibodies more potency with a
named thing to attack. Out with modeling, the Agile Manifesto implied, at
least to many in our field.
Enter the last decade. The Age of Assurgent Agile. And again... bitten by
zeal. There've been enough projects mired in code rot from glomming code
together at high (at first) project velocity to trigger the Craftsmanship
Manifesto. And a list of recommendations to measure and test our way to
quality... yes, if we do this we will tend to write cleaner code, and we will
learn how to. But that is worth saying! Work intentionally. And incent that
intentionality by making it visible through measures and tests when it is being
compromised by pressure-cooker management.
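In that spirit, a minimal sketch (hypothetical function, pytest-style tests) of what making intentionality visible through tests can look like: each test names the design decision it protects, so a pressure-cooker change that violates the intent fails fast.

```python
def normalize_username(raw: str) -> str:
    # Two deliberate design decisions: usernames are case-insensitive,
    # and surrounding whitespace is noise. The tests below record them.
    return raw.strip().lower()

# pytest-style tests: the test name states the intent being protected.
def test_strips_surrounding_whitespace():
    assert normalize_username("  grace  ") == "grace"

def test_is_case_insensitive():
    assert normalize_username("Grace") == "grace"
```

The point is not the trivial function; it is that the intent lives somewhere executable, rather than only in someone's head.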
We practice extremes, get aberrant results, and recoil! But the tough
lesson should be that extreme all-out-to-the-exclusion-of-other-good-practice is
where the issue lies. It is simply not pragmatic. Being pragmatic means taking what helps, in the degree of moderation that is helpful.
If we want to resist entropy, architecture needs to be a front-and-center
concern not just in a handful of early sketches to launch teams, but through the
creation and evolution of the system. We act intentionally, and we reflect on
what intention and accident brings about, apply reason and experience and evolve
the architecture. So we get this blend of intentional and emergent, experimental
and reflective, reason and accident, etc.
Yes, we don't know everything upfront, and we'll learn as we progress. So? Does
that really mean we should disregard both experience and reasoning with other
tools than code to aid our thinking? Don't we instead want to figure out as
cheaply and quickly as we can where our make-or-break uncertainties lie? Does
experience and expertise count for anything in our field? And if it does,
doesn't that mean we should make key enabling decisions as early as possible,
creating the space within which we can productively move forward? Yeah,
sometimes we'll be wrong, but we apply a discipline of discovery so that we find
out as early as anyone reasonably could!
3/1/11: This is not about all or nothing. It is not about devaluing the design that
happens in the medium of code (and tests). It is about bringing our various
resources to bear -- system design thinking, expertise, modeling, patterns,
experiment, incremental and iterative architectural designing/testing/improving and incremental and iterative designing/coding/testing/improving. The
software we create is an important enabler for its users and the code we
write impacts the quality of life of those who evolve it. From both points of
view, we need to take a multi-faceted approach to creating great systems. These
grow ever more complex as we expect more and more from the enablement our
compute-rich systems afford. We have to bring more intellectual resources to
bear so that we can husband that complexity! This includes working
explicitly at the system design level, on design of key mechanisms and on
code. Early, for example, we need to allow more expansive search patterns before
we converge into a "canal" that pulls us along a course that is in important
ways predicated by the choices we've made.
This post provides an interesting anecdotal counter-point, demonstrating that even though our code hygienists may not intend to subvert design, their language leads developers to think explicit design unwarranted (thinking that, between tests and our refactoring tool of choice, design will emerge): Object oriented abstractions. I came across that post via Tim Ottinger, who tweeted:
Yes, if the gurus of our field don't talk about design, developers will
continue to discover the need for design the hard way. This may be one of those
unintended consequences things, where focus on a really important problem isn't
meant to imply rejection of other important practices. But our communication is the effect we have. Yeah, the
message receiver has a responsibility too. And there's abdication of
responsibility versus delegation, and people who follow the thought leadership
of gurus should not abdicate the responsibility to educate themselves (on the
gurus' earlier work, even). But society would be awfully cumbersome if we
couldn't delegate responsibility and it takes time and experience to find the
line between abdication and delegation. And who can develop that experience
without delegating some of the time?
We need to remember that even though many of us went through the visual
modeling wave and the patterns wave and so forth, developers "coming of age"
after each of these waves did their hype burn-out thing, need a new round of
enthusiasm and education on practices we might consider old news.
Hey, on the other hand, I suppose we could start a podcast series called
"Software Improv"... ;-)
Image: Tim Ottinger tweet on 3/1/11
2/28/11 To Engineer or Not to Engineer -- To Quibble is Human
A thought-provoking post from Thomas Jay's alter-ego. Personally, I like to think my
background is in software engineering, because I align with the engineering
ethos. For example, these words resonate with how I see myself and our field:
...day-to-day engineering work
is energized by a unique belief system which forms an enduring and
coherent engineering ethos. It proposes that the engineers' view of
the world is at once formative, utilitarian and reductionist. Good
engineering practice comes from the productive synergy of these
elements. ... These lead to separate but complementary aspects of an
engineering ethos, namely seeing the world as essentially
problematic, as a commercial challenge and as an opportunity for
continuous, useful, material development. It is argued that,
together, these three outlooks empower the practice of engineering.
-- JE Holt, On the nature of
Mechanical Engineering Work - An Engineering Ethos
Software development doesn't own iterative and incremental. And
concurrent engineering didn't even start in software, so far as I know. At
least, we were borrowing ideas from concurrent engineering and leveraging them
into software development practices in the 90's. I do rather suspect, given what
I have observed in practice, that other engineering disciplines do a good sight
better job using simulation to test design ideas than we do, for example. We
just have to open up our frame to allow that there are various ways to do design
in any of the engineering disciplines. At the same time, other engineering
disciplines are learning from Evo and Agile in software. And while it is true
that the dominant practice of doing what design we do in
the medium of code has produced much of the software we have today, we have
also fessed up to the need in our field to do better (witness the wording in the
Craftsmanship Manifesto). Yes, in part it entails sharing and learning from each
other in craft-like guilds. And in part it means creating more shared ground
under our feet, drawing out knowledge that would otherwise remain tacit and
making it sharable. Architects do this explicitly in the
architecture when the rationale explains the thinking behind decisions, and in mentoring their team(s). It is also what research does:
research that learns from what is practiced. And frontier-pushing scientific
research that advances practice.
Software engineering is going to be different than mechanical engineering.
But so is chemical engineering! I think we'd do well to say "if we are an
engineering discipline, what does that mean?" What do we need to do better, so we grow up to be "enjuneers"? (I was interested to find, chatting with IU
faculty, that university politics have muddied the waters. As I understand it,
because Purdue owns engineering, the Computer Science department at IU
Bloomington is restricted from including any classes that are considered the
purview of engineering/owned by Purdue. Go figure.)
That said, we get way too hung up on words. What we do is what counts.
Yes, words have the power to shape what we do, but they also have the power to
trap us in a rat-hole of mutual misunderstanding if we get pedantically brittle
I incline to Feynman's pragmatism here:
'We can't define anything precisely. If we
attempt to, we get into that paralysis of thought that comes to philosophers…
one saying to the other: "you don't know what you are talking about!". The
second one says: "what do you mean by talking? What do you mean by you? What do
you mean by know?"'
-- Richard Feynman, The Feynman Lectures,
You can know the name of a bird in all the
languages of the world, but when you're finished, you'll know absolutely nothing
whatever about the bird... So let's look at the bird and see what it's doing --
that's what counts. I learned very early the difference between knowing the name
of something and knowing something.
-- Richard Feynman, "What is Science?",
presented at the fifteenth annual meeting of the National Science Teachers
Association, in New York City (1966) published in The Physics Teacher Vol. 7,
issue 6 (1969)
Anyway, learning from other fields, and they from ours, is healthy. Our
field-specific learning is crucial, of course. And I think it would be helpful to remind
ourselves that software development is often part of a larger context of product
and system (supporting, for example, financial services, the "products" of insurance companies or banks) development. So when we draw parallels with other engineering disciplines, and they with ours, we need to use that frame of reference.
In particular, we're talking about innovation and creating competitive
differentiation, about discovery and problem solving. Then we need to ask what
the dominant forces are. Software development for the space shuttle is going to
be different than software development for an iPhone app. A manufactured device
that has manufacturing molds that cost millions of dollars to make is going to have different constraints and forces factoring into the design process than software that can not only be changed throughout development, but can generally even
be changed once the system is in use (although the cost of doing so varies
across different contexts). That said, once you think in terms of creating some
40 product variations off a common code base more quickly and more cheaply than
competitors, you're also into a different constraint set for software. And so it
goes. Context factors. And because context factors, embedded software has a
larger lifecycle to sync to.
One could throw lots of arrows at large scale projects like Ford's "Neverest"
and say they should have done the "one block pedestrian mall" (show value
quickly and early) to get suppliers begging for more. But it is still a
challenge co-ordinating many teams of teams to tackle anything significantly
ambitious. And ambitious projects do succeed. We just hear about those
that fail big. And while I'm there, I'll just say that for anything in
development, Hofstadter's Law is at play, so while the deadlines are important
forcing functions, we should ultimately measure success in market terms, for
schedules are at best an educated guess.
Personally, I think our field is interesting and has so many exciting
challenges that we need to learn with huge humility and eagerness, with a great
appetite for analogy (rather than treating analogy as a war of words that raises
our defense mechanisms), and with a willingness to experiment and learn, and
become more self-conscious about moving the learning from tacit to explicit
knowledge. We've made huge strides with patterns and practices. We're not done.
3/8/11: IEEE takes software engineering seriously:
2/28/11 Quiet Backwaters Place
Fortunately tomorrow we flip to a fresh journal page, and all references here
will be buried in an archived wall of words. Bring on March!
This paper is not directly about architecture visualization, but
about the issue of representing the architecture concepts in the
code, which relates to visualization (for example, so we can
"extract" the architecture):
2/28/11 The Passing of Another Great