A Trace in the Sand
by Ruth Malan
This journal holds a trace of my journey of discovery (at least the part thereof that has nothing directly to do with work with clients). I write to think, to learn, to nudge ideas around and find the insights they were hiding... So, a new characterization for my Trace emerges -- this is my own personal "maker" space, where what I am building through exploration, discovery and experimentation is myself, my point of view on architecture and being an architect. This is, then, a learning lab/playground of a curious mind... hence it is, well, messy! Consider yourself warned! :-)
Point of view? My PoV is: where I see from, what I (seek out and) look at, and what I stand for.
10/2/11 Brain Treats
Important post from Doug Newdick on climate change and IT!
10/3/11 Competition for Watson
Roll on Watson, there'll be a new apple of the media's eye:
"Make no mistake: Apple’s ‘mainstreaming’ Artificial Intelligence in the form of a Virtual Personal Assistant is a groundbreaking event. I’d go so far as to say it is a World-Changing event. Right now a few people dabble in partial AI enabled apps like Google Voice Actions, Vlingo or Nuance Go. Siri was many iterations ahead of these technologies, or at least it was two years ago. This is REAL AI with REAL market use. If the rumors are true, Apple will enable millions upon millions of people to interact with machines with natural language. The PAL will get things done and this is only the tip of the iceberg. We’re talking another technology revolution. A new computing paradigm shift."
-- Norman Winarsky, interviewed in Co-Founder of Siri: Assistant launch is a “World-Changing Event”, 3 October, 2011
We'll know more tomorrow at the [much anticipated] iPhone 5 announcement.
10/4/11: uh, make that the iPhone 4S announcement... media underwhelmed...
I like this alternative wording for relationship platform: (experience-based) engagement platforms (CEOs Must Engage All Stakeholders, Venkat Ramaswamy and Kerimcan Ozcan, October 4, 2011).
Relationship platform leaves the concept open to encompassing relationships between systems as well as engagement or relationships among people and groups or entities, so perhaps it is better for my purpose. Still, engagement platform conveys a defining element well.
10/4/11 From the Stream: Leadership and (organizational) Politics
“The second key is to see yourself not only as a fierce competitor, but also as a broad collaborator. Don't get me wrong: Competition is essential as a spur to innovation. But in a world of increasingly interdependent systems ... the Wild West of competition needs to be complemented and tempered by far more collaboration across old boundaries. Across academic disciplines ... and industries ... and nations ... and even among competitors.” -- Irving Wladawsky-Berger
10/4/11 Watson, meet Siri!
Just kidding! Low hanging humor fruit... But there is a point with respect to user interfaces in that gestural and voice interfaces will find ever more application, not just on mobile devices (though arguably largely driven by such devices because the interaction surface is so small).
And then, the biggie that was missed -- AI redefining search and serve. That has to be on the radar, at any rate. Big mucky data meets intelligent assistant and the world changes! I can't believe how ho-hum the reaction to the iPhone 4S has been. Ok, Steve Jobs wasn't there to rub the iPhone and have its genie appear, but really! Where's the imagination? Think about it -- Apple has given Wolfram/Alpha a voice on the iPhone. Among other things -- like context awareness, machine learning, natural dialog. This holds promise! This low-key introduction is ... downright foreboding! [First they serve and enable us, then they replace us!] And exciting! [First they serve us. ;-) ] We'll have to get our hands on one and see how far along it is on delivering on that promise. 10/12/11: The big question is -- will we recognize that Apple mainstreaming AI on a consumer device was Steve Jobs' astonishing-achievement-capping swan song, and did Steve die thinking we were idiots for not getting it, but also knowing we will, in time?
And of course there's Watson. Watson -- Doctor Watson to you, Watson to Sherlock Holmes -- the archetype of a sidekick, and a (medical) doctor at that. What a perfect name. A serendipity, since it was named for the founder of IBM, and perhaps also prescient -- as Watson's derivatives emerge, Watson will surely find a role as the intellectual sidekick to many a professional. First up, a physician's assistant.
Kinect helps us see Microsoft as interesting beyond the "productivity" space, but... Apple and IBM sure have the innovation limelight and it's all about AI. Google decided it was all about social, so they're going to have to scramble... Although... you know I'd like an affordable robot to taxi my kids, clean my house and keep my yard in order. Who's working on that? Yes, as cars go, Google. But humanoid robots seem to be coming out of Asia...
In the US, we've focused on software that serves mind treats (social, search, Watson and Siri). But there is plenty of room for engineering of stuff, creating tangible products that would make people's lives more the way we want them to be... Smarter stuff -- ecologically and in the sense-respond and sense-inform sense too.
I know that's not an enterprise app... but just translate that to sensors and pulling the finger off the trigger on algorithms that sink stock markets... Not enterprise significant? How about commodities markets? Hm, word of social net and production forecasting? Ahhh. It's all in the analogy dear Watson. :-) The other day I joked to Dana that the Ruffian[-inverse]-Turing Test will be: when humans don't understand it but a robot can, we'll know it's a robot. I think that will go for analogies too. We're getting more binary and AI deals better and better with fuzzy and unstructured.
All told, we're hurtling through revolutions in how we conceive of our humanity (neuroscience, behavioral economics and more are revealing our stunning perceptual inaccuracies, among other flaws), extended humanity (digital information, AI), and ... replacements for humanity (manufacturing robots, digital assistants, ...)
We have to get our house in order with respect to sustainability, but there is so very much to be done, there's so much to be optimistic about! If we can just get the full workforce employed and spending... on sustainable lifestyles, of course...
"When you wonder where all the youthful creativity is; where good old “Yankee ingenuity” has gone, it’s still here. But not in formal education. Anyone who is looking to find the next generation of engineers, technologists and free-thinkers need only go to one of these Faires or visit the thousands of Hacker Spaces springing up across the country. It will leave you breathless…and hopeful."
-- Ira Flatow, DIY Sci, October 3, 2011
Ok, so I totally know the chances are good that this post is a tad exaggerated... but whether experience with the 4S proves the promise of Siri or not, we know that this is an unstoppable direction and we may as well get excited about it so we see what possibilities it holds. And we may as well get excited about regrooving for sustainability because it will create jobs, save money, and leave less impact on the planet and all its lovely creatures. (Uh, you may as well get excited. I'm bleeping terrified. ;-) Or I would be, if you didn't get excited. So let's all get excited and change this world!)
10/10/11: IBM has Watson, and a primary focus on vetted knowledge sources. Apple has Siri giving voice to Wolfram/Alpha -- though I expect the relationship will have its trials... It will be interesting to see what else Siri does, AI wise. Google trawls everything i-way with impressive diligence -- imagine what that could turn into! Amazon's Silk approach is interesting.
10/11/11: This explains more: Why is Siri So Important?
10/12/11: Dana, who cuts wood as well as code, tells me that there is a big furor about the tablesaw blade sensor thing in woodworking circles.
10/13/11: Apple's Siri Is as Revolutionary as the Mac, James Allworth, October 13, 2011
11/1/11: Update on humanoid robots:
Stunning Video of PETMAN Humanoid Robot From Boston Dynamics, Erico
Guizzo, October 31, 2011
Our software culture is so black-and-white (I like to say binary, 'cos I'm a geek -- not so's you'd notice, but non-geeks do) that we miss a lot of the in-between shades. Like red. ;-) What am I on about? Oh yes. My point! Well, it's like this, see. We get our knickers in a knot because "construction" is a bad metaphor for software because it is creative, inventive, exploratory, entails discovery even of what we're about (you know, "requirements" that emerge as we develop the system). And we get our knickers in a knot about applying the "engineering" term to software development because we build schlocky systems that "real" engineers would disassociate from. See a problem? It's staring us in the face! We need to be doing "real" engineering because we're developing production systems -- systems that will "go live" and have, if not lives, then the economic productivity and vitality of our companies and communities dependent on them. But we're experimenting our way to them. We think we can test our way to engineered excellence, but we don't know what the "real" requirements are, and sometimes we have but the scantest clue how we're going to accomplish the "requirements" we "know" we have. And yet we need to deliver code into "production" -- into situations where our systems need to be dependable.
If we recognize that we have threaded two processes together and then turned a blind eye to their different needs, we have the chance to do something about it! Two processes?
Going from the messiness of our discovery-oriented process to the well-factored, tested integrity of our engineered system shouldn't be considered rework or waste! Unless we leave it until after it has sorely impacted users and our business viability. That is waste.
Our process must iterate through discovery and engineering continually, interweaving exploration with "refactoring" (loosely put -- meaning redesigning/rewriting) throughout the life of the system. The nature and focus of discovery may change over the lifecycle, but there is ongoing need to accommodate the changing landscape into which our system fits, changing demands, changing understandings about what the system is and how to address the demands, forces, challenges the system operates under. And as all this evolution happens, there is a need to continually work to shape and reshape and maintain the integrity of the system.
10/23/11: Great code is written twice (or more), Roy van Rijn, October 21, 2011
As we proceed in the fog of uncertainty, entropy grows -- and produces more fog! Under uncertainty we "give things a try"; accept good enough, and try the next thing. As entropy grows, it introduces its own uncertainty...
3/7/15: Observation: I was inviting exploration of discovery and its consequences, and engineering and its demands, for software development. Discovery is simply part of the game; there's no way around it. But discovery is experimental -- we try things out, we must give some avenues up, explore others further. And we're working discovery on two fronts -- what the system needs to do, and how to get it to do that. There is another arena of attention -- the structural integrity of the system.
A certain Professor Alexander told us in his strategy class that managers don't read, so use short sentences, short paragraphs, and write brief briefs. I broke every one of his rules. Got an A. Flout confining expectations. Break rules made to level. I did and still do. I break rules of grammar. When it suits me. I nest. Clauses. I iterate, and conceive in parallel. And expect you to too. I don't try to be colorful. But shucks (the word of the day today on Twitter; getting old already), no-one at all would read this were I not colorful! There are just too many bases on which to discount me. Oh. Right. I am discounted. Few read here. Rats. Short sentences then. That'll do it! Right? Oh.
The biggest key to success as a writer is not short sentences. It is writing. And living an interesting mind-life, so we have grist for our thought mill. An architect as writer needs to have something interesting and important bubbling under the surface. So interesting and so important that she -- or he -- cares enough to write it out, think it through, improve and learn how to pitch it. Or pitch it. Bob Dylan and Leonard Cohen both write, and walk away. From a lot of what they write. Writing is to learn. To conceive. To frame. To draw out. So we learn what we know, what we have within us melded from our experience and interacting with others' minds and their experience and sense-making. So we see what is important -- useful and a priority to draw attention to. And how to express it. Thinking it through shapes how we draw it. How we talk about it. What we draw attention to. And explain or justify and persuade others to help us do, or to do differently.
Our audience is busy. Much to attend to. So sure, simplicity factors in. And being interesting and meaningful. And relevant.
Make your own rules, but don't feel you have to start from scratch -- here's a collection and a reminder:
As essayist, programmer, and investor Paul Graham has written, "Writing doesn't just communicate ideas; it generates them. If you're bad at writing and don't like to do it, you'll miss out on most of the ideas writing would have generated." --
Jocelyn K. Glei, 25 Insights on Becoming a Better Writer
And sure, code counts as writing. But code tends to obfuscate architecture and we need to set architecture in bold relief so that we can study and improve it, test it out not just in our mind's eye (with that so-limited working memory) but with the interaction of our and other minds on it.
The sad news reminds us of our mortality. We could not hold onto Steve Jobs, nor he onto life, despite all the will and money in the world. There are many ways we can make a difference and one of them is to see the great in what others do. If there is anything I admire Steve Jobs most for, it is that. The more greatness we see in the world, the more we have to draw on to make something new -- something "insanely great." But first to see and appreciate. Combining an aesthetic appreciation, the ability to wonder at, with a sense of possibility, the ability to wonder. Then to envision a new concoction of greatness to make something fantasmically new in the world. And then to demand greatness in execution of the idea, making tough choices to retain the essence that is great, and to appreciate it enthusiastically, even child-joyfully, as it comes to fruition.
10/6/11 Famine in Somalia
Children by the tens of thousands are dying from the tortures of famine -- "a tragedy unfolding" (thanks for the pointer Daniel). Drought-induced famine that is complicated by war. We can do something immediate -- the Red Cross, UNICEF and WorldVision, among others, are providing aid in the drought-hit region.
If climate change goes the way scientists are predicting, horrors like this will mount. Our neighbor is an environmental physicist. He has installed solar, cycles to IU, and drives a hybrid. His actions match his words. Our tastes and priorities are changing, but I feel like conspicuous consumption and climate evil personified driving an SUV. We are changing habits and purchasing choices, but on a budget some changes just take time to roll over. But. Too many "but"s. We have to get past them.
Today I see that Doug Newdick illustrated what Conceptual Architecture is good for -- a Conceptual Desktop Architecture. Great post Doug! And (serendipitously) well timed!
Aside. I like Doug's use of hand-sketching on his diagrams. They are very useful illustrations that really add to his text. And the hand-drawn conveys the human touch so eloquently.
I believe Doug is right about Conceptual Architecture being important. And right that it is a neglected view. But not in our book. Well, our book is under substantial (complete!) revision. Which is why, among other things, I'm trying to get the framing (more) right -- or at least as right as it can be given the current set of understandings we (the field) have. So thanks for carrying the conversation along -- even though Doug wasn't aware of my piece of the conversation/this Trace entry [which I have trimmed down now that it served its purpose, after its own fashion].
10/25/11: Thanks to Stuart Boardman for helpful questions and suggestions -- I integrated some responses and have more still to do, but the conceptual architecture write-up is getting better. Oh, it still needs work, including trimming away redundancy... so, suggestions welcome. They don't even have to be gentle, but...
I liked this (very short) animated video: The Ride - by King & Country. These Rube Goldberg-styled "machines" so make me think of visualization of executing software, and this one is even more delightfully apt as it is a fantasy world with allusions to the real world -- a construction of the (analogical) mind, as software is.
And the visualization (right), from the video, of a system as interacting mechanisms is wonderful!
Isn't that just what our systems would look like, if we could take the skin off?... :-) Well, as I see it, it would have various defense shields. But our systems have to survive in a hostile world.
And so it goes.
Now you're convinced I've lost my marbles... ;-)
Image source: The Ride - by King & Country
10/6/11 EA KPIs
This appears in KPIs for Enterprise Architecture:
- Frequency of updates to the enterprise architecture
Now, would frequent updates be an indicator of goodness or badness?? Good -- responsive to emergent need. Bad -- was way off base to begin with... but good if responsive enough to fix it... And so it goes.
Well, indicators are just that -- signals to look deeper. Just so long as management understands that.
10/6/11 Why We Sketch
Via Peter Bakker:
"Up until now, they were talking about WHAT they were trying to do. Now, they could talk about HOW they would do it.
The WHAT was now on the whiteboard—and in everybody's head. For the first time, it was the same WHAT everywhere."
-- Jared M. Spool, Why We Sketch, Sep 22, 2010
10/6/11 Decisions of the Moment
Not all decisions are equal (in import, scope of impact and consequence). The architect needs to be able to assess (and given the authority to decide) which are architecturally significant, and when to make* those that are. Some need to be made early, just to get ground under the feet to move forward on.
"a long and rapid succession of suboptimal design decisions taken partly in the dark."
-- Philippe Kruchten, "The Architects -- The Software Architecture Team," Proceedings of the First Working IFIP Conference on Software Architecture (WICSA1). Kluwer Academic Publishing 1999
* collaboratively, of course, but with the authority to make a decision if the collaborative/consensus process stalls.
10/6/11 Great Talk!
Michael Feathers "Code Blindness" talk is wonderful, and he really positions software visualization well!
Michael Feathers: to get past code blindness:
- more monitoring of applications -- find indicators of problems in code bases
- boiled frog syndrome -- those closest to the team are not able to tell where the problems lie
- planning for replacement
Michael Feathers: metrics are dangerous when you lose context, or when they become goals (in and of themselves)
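To make the monitoring point a little more concrete: here's a toy sketch (not from the talk; the extensions, weights, and thresholds are all invented) of mining a code base for crude indicators. In the spirit of the caveat above, the output is a signal to look deeper, not a goal in itself.

```python
import os

def indicators(root, exts=(".py", ".java", ".c")):
    """Gather crude per-file signals from a source tree."""
    report = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                lines = f.readlines()
            report[path] = {
                "lines": len(lines),
                "todos": sum("TODO" in l or "FIXME" in l for l in lines),
                "long_lines": sum(len(l) > 120 for l in lines),
            }
    return report

def worth_a_look(report, max_files=5):
    """Rank files most worth a human look -- the score is a pointer,
    not a verdict (the weights here are arbitrary)."""
    score = lambda m: m["lines"] + 10 * m["todos"] + m["long_lines"]
    return sorted(report, key=lambda p: score(report[p]), reverse=True)[:max_files]
```

The moment a team starts gaming `worth_a_look`, it has become a goal and stopped being an indicator -- which is exactly Feathers' warning.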
10/7/11 Opportunities to Influence
The Fall Ballet at IU tonight was wonderful! And our world is decked in resplendent color.
A good note to close on...
I also write at:
- Bredemeyer Resources for Architects
Tim O'Reilly held Google to its declared best self in his "What Android can learn from Steve Jobs" keynote address at Android Open -- in the most gracious way, of course. His address was a mix of what Google is doing well -- where it expresses and aligns with its soul -- and a call to do more of that. It was a courageous (ok, Tim can afford to be more courageous than most, but still he didn't try to protect powerful relationships by pulling his punches) and inspiring keynote and I'm glad I was able to catch it on livestream (even though it puts me seriously behind on today's commitments!). The slides are here, but I recommend the video (scroll down to get to it) -- along with the slides.
The address is more pitched at Google than the (rest of the) Android community, but we can see Tim really set out to speak for the community, to call Google to be its best self, to be true to its stated, projected image -- to do no evil, to not be closely controlling and so close off others from leadership roles in the Android ecosystem.
It is a wise talk, full of strategic savvy, and I highly recommend it to architects wondering what strategy means to technologists. :-)
By extensively quoting the "founding fathers" of Google, Tim not only made his intended points, but built the image of the evolving "creation story" of Google, strengthening his "be true to the self you project" call. We do, after all, want to be known for the best we see and value in ourselves. As Tim points out (quoting Kurt Vonnegut's "You are what you pretend to be, so be careful what you pretend"), by stating how we want to be seen, we call upon ourselves to live the image we project. If we do not strive to live up to that projection, the world calls us on the disjunction, and we are seen as inauthentic and lose trust.
This, from a slide in Tim's talk, is an inspiring -- and daunting (will we fix the environmental mess, or make it worse) -- way to position what we're about:
The "hardship for us all" is a bit jarring coming from Sergey Brin, but he does make a good point all the same!
12/7/11: I don't know how much cash Google had socked away by 2008, but with $43 billion in cash in 2011, "hard for all of us" leaves the taste of disingenuousness in one's mouth.
Showing how bright ideas can be really simple: water bottle lighting.
And dim ones can backfire: Netflix VP: Why We Moved "Too Fast," And Why "We Were Wrong" On Qwikster, Austin Carr, 10/10/11
At least they're saying: "We were wrong."
Ok, HP board -- your turn!
What happened at Netflix and HP sure demonstrates the power of social media.
I relate this (my italics):
More than a week after Stalin’s death, Eisenhower was talking with speechwriter Emmet Hughes about the address. “Look, I am tired—and I think everyone is tired—of just plain indictments of the Soviet regime,” Ike said. “I think it would be wrong—in fact, asinine—for me to get up before the world now to make another one of those indictments. Instead, just one thing matters. What have we got to offer the world?”
-- The Origins of That Eisenhower 'Every Gun That Is Made...', Robert Schlesinger, September 30, 2011 (via Tim O'Reilly)
to Jobs' key point in a presentation to Apple employees in 1997 (via Tom Graves):
"To me, marketing is about values. This is a very complicated world, it's a very noisy world. And we're not going to get the chance to get people to remember much about us. No company is. So we have to be really clear on what we want them to know about us." — Steve Jobs to Apple employees, 1997
Me-too products (simply copying others' feature sets) lead to more complexity with no distinctive meaning and value, forcing a battle on price leading to ever more tightening of cost reduction screws.
What is common to those two "great speeches," Eisenhower's and Jobs's, is the call to find and be true to distinguishing value.
Aside: I like the image of the "bite" out of the Apple apple that is Steve Jobs' profile, and the way the apple logo itself suddenly now seems to be a candle! Who did that? It's really impressive!
11 October 2011
Looking again at modularity, I discovered by a neat serendipity that Bill Shackleton tweeted a set of links to very useful papers on modularity last night! Timed that well, he did! :-)
So via Bill:
- Managing in an Age of Modularity, Carliss Baldwin and Kim Clark, HBR, 1997
- Papers by Carliss Baldwin
- Modularity, Architecture and the Role of Knowledge, Richard Tee, 2011
- Valuating Modular Architectures for Cross-Company Electronic Interaction, Schmid et al, 2009
- Industry Structure and Modular Product Architectures: An interpretation of the bicycle industry, Juliana Mikkola, 2002
I hadn't read the last of those, and my Modularity and what we can learn from Trek blog post was independently derived, but serves as a complementary companion, perhaps.
Now I need to read:
- Where do transactions come from? Modularity, transactions, and the boundaries of firms, Carliss Baldwin, Industrial and Corporate Change, 2007
for its implications as we move to less hierarchically composed organizations. The section on modularity theory is a wonderful trace of (some of) the history.
My interest in modularity is from a software and system perspective, but I got pulled off course for a spell by Carliss Baldwin's article, which intrigued me because we think that organizations will move in a direction that is more networked and "podular," with fuzzier boundaries, raising interesting questions about transaction costs. Peter Bakker pointed to the Wikipedia entry on community structure, which is interesting.
10/18/11: Modularity in Technology, Organization, and Society, Richard N. Langlois, Economics Working Papers, 1999
12 October 2011
It is interesting that we have moved from Douglas Engelbart's conception of augmented human intellect more to human-computer symbiosis (with roots in Licklider's work, but a referent, for example, in Tim O'Reilly's articulation of a network-mediated global mind). That is, implicitly and I think unselfconsciously, we have shifted from computing as enhancement to computing as peer. The next step in the trajectory -- human-augmented computing?
In important ways, we're already there. For example, in software visualization, we more sensibly depict system structure derived from dependency and semantic analyses (algorithms) when we factor in the (human) understanding of the team. That is, we're moving into the sphere where our (so-far uniquely) human proclivity for 'hunches, cut-and-try, intangibles, and the human "feel for a situation"' (Douglas Engelbart) is needed to augment the powerfully "rational cogitations" of compute intelligence.
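That interplay -- algorithmic structure refined by the team's "feel for a situation" -- can be illustrated with a toy sketch (all module names, edges, and groupings below are invented for illustration): an algorithm clusters modules by package prefix, and a human override folds in what the team knows that the paths don't say.

```python
from collections import defaultdict

# Hypothetical dependency edges (module -> imported module), as a
# dependency-extraction tool might report them.
deps = [
    ("billing.invoice", "billing.tax"),
    ("billing.tax", "util.dates"),
    ("ui.cart", "billing.invoice"),
    ("ui.cart", "util.dates"),
]

def cluster_by_prefix(edges):
    """Algorithmic grouping: take the top-level package as the 'structure'."""
    groups = defaultdict(set)
    for a, b in edges:
        for m in (a, b):
            groups[m.split(".")[0]].add(m)
    return dict(groups)

def apply_team_view(groups, overrides):
    """Fold in human understanding: e.g. the team knows util.dates really
    belongs with billing, whatever the path prefixes say."""
    for module, group in overrides.items():
        for members in groups.values():
            members.discard(module)
        groups.setdefault(group, set()).add(module)
    return {g: m for g, m in groups.items() if m}

# The depiction we'd draw is the algorithm's answer, corrected by the team:
view = apply_team_view(cluster_by_prefix(deps), {"util.dates": "billing"})
```

The algorithm supplies the "rational cogitations"; the override dictionary is the hunch, made explicit enough for the tool to use.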
Of course, all this is in service of humanity (environmental catastrophes aside). For a while yet. At least. ;-)
"The point is that tools are always going to be used for certain things we don’t find personally pleasing. And it’s ultimately the wisdom of people, not the tools themselves, that is going to determine whether or not these things are used in positive, productive ways."
-- Steve Jobs, The Playboy Interview, 1985
Our What it takes to be great paper played a cornerstone role in our field, building recognition that architects are not uni-dimensional technologists. I think the Getting Past ‘But’: Finding Opportunity and Making It Happen paper was likewise a scene-setter, but more, it is one of those inspire and enable pieces. It puts together story and group graphics, concurrent development, model-storming and sketch prototyping with its fail early/cheap/learn kind of agility, and more.
As covered in the paper, agile architecting prefaces development iterations with an extreme-agile approach that iterates on the system concept and its architecture, exploring initial approaches and creating sufficient context for the unfolding adaptive development of the system. That's not the end of agile architecting, which continues throughout the life of the system. But contrast this approach with setting out on a development path that relies on the miraculous vision of a product owner who somehow internalizes not just what the system should be but what it will take to build it. Hm.
It's an important paper. I would write it a lot differently today. But that doesn't make it irrelevant or not worthwhile.
As for Fractal and Emergent... :-) Well, it pairs well with A Kodak Moment to Reconsider the Value of IT (Robert Plant, October 12, 2011) and, actually, Steve Yegge's Google Platforms Rant. Not to mention Greg Satell's A Radical Shift Toward Design (October 2, 2011).
I enjoyed Alistair Cockburn's Effective Software Development presentation (although I had to forward through the certification bits, and such; not enough time in a day for every distraction... oh yes, and work)! I wonder if Alistair adopted the accent intentionally, or if he just has a talent for absorbing context?
Get It While You Can!
Read this right now (you'll thank me!): Steve Yegge's Google Platforms Rant
Some Neat Visualizations
'“architecture is experienced habitually, in a state of distraction.” Architects must then weave ways to flow people and resources through built environments' -- Danzico, Enforced Listening Moments, Dec 1, 2009
Isn't this what your system needs: Chopin's for Dummies, Jeremy Denk, Nov 30, 2009? Well, of course, not exactly that, but that kind of enthusiastic and acute-but-sensitively informed telling of the meaning and import and intent of its structure and mechanisms?
13 October 2011
Visual Bits from the Tweet Stream
Via Peter Bakker, among others:
- Thinking Maps
- Parsons Journal for Information Mapping (for example: Consumer Insight Maps: The Map as Story Platform in the Design Process)
- Adobe Museum of InfoVis
- ComplexityGraphics (awesome!)
Some other sites were mentioned, but they are already on my list: Visualization in Other Fields.
Interesting piece of visual art history: 100,000-Year-Old Art Studio Discovered, Cynthia Graber, Scientific American, October 13, 2011 (via Maria Popova/@brainpicker)
So Sorry to Hear That!
Dennis Ritchie died on October 8. It makes news today? Now that was a life that made a difference! Funny how we don't think of it that way, until it's in the rear view mirror. Well, it's humbling and enriching to think of what Dennis Ritchie gave us. Unix and C. And all that stood on the shoulders thereof.
10/14/11: Rob Pike's reflection is worth reading.
Of note: he doesn't take away from Steve Jobs to give to Dennis Ritchie. This is not a competition among dead heroes for our worship! This is our huge shout (and sad sigh) of thanks, and an expression of wonder at having lived when these men lived and made a difference -- changed the world, even!
- The C Family of Languages: Interview with Dennis Ritchie, Bjarne Stroustrup, and James Gosling, Java Report, July 2000
- Dennis Ritchie, Herb Sutter, 2011-10-12
14 October 2011
Mortality sucks. But it reminds us to care for and appreciate one another because we blaze for such a short time. We have this chance -- this short-lived opportunity -- to feel the presence of others in our lives. To connect through the rare act of seeing -- of putting aside the huge mirror of self that prevents us from seeing -- into the mind and heart of another. And by seeing, by putting our great big oofish self aside, we are enriched, in our view, our empathy, and the material we have to draw on to make fresh connections, lovely in their unique composite form -- more lovely because they drew from beyond the limits of our self.
We can choose how we see ideas and the people who associate themselves closely with their "thought children." We can see them as threats to our dominion, or we can see them as they are, especially as they are in their best light. I suppose we need warriors -- not convinced, mind*. But I lean so much more toward those who are compassionate, kind and generous. They make the world better in our experience of it.
"What unites the CIRTL Network universities is a commitment to developing a national STEM faculty better prepared to teach, through three core ideas: teaching-as-research, learning communities, and learning-through-diversity," -- Robert Mathieu, ACM Newsgram, 10/14/11
I was struck by that last: learning-through-diversity. I realized that I have an opportunity I wasn't valuing highly enough -- being in a distinct minority, everyone I work with is very different from me. That's a learning-through-diversity opportunity of huge proportion! Right? I get more opportunities than you do to work in contexts where styles and viewpoints are very different from my own. Rats! I should be so much better at this than I am. All that opportunity 'n all.
* Given that there are warriors, we need warriors, but what if we were free of warriors?
14 October 2011
Call for submissions for the SEI Architecture Technology User Network (SATURN) 2012 Conference. The SATURN 2012 Conference will be held in collaboration with IEEE Software magazine and will take place May 7-11, 2012 in St. Petersburg, Florida.
I enjoyed doing a tutorial at SATURN in 2010, but unfortunately the conference overlapped with Ryan's graduation (the culmination for most in his class of 9 years at the Montessori School, so a big deal -- with speeches!) so I missed the chance to sit in on other presentations and tutorials. Linda Rising, Rob Nord and George Fairbanks sat in on mine. Rob Nord is beyond compare -- wise yet child-open and joyful at discovery. He enriches experience, and it was a privilege! I've since watched Linda Rising on youtube, and she's the Ellen DeGeneres of software, or something like that! I mean, she's an engaging and dramatic speaker.
And HP could make it happen! Turn them engineers loose on defining the future! Tell Meg Whitman I said so. That'll have some influence.
I mean, I love Apple and Asia and all, but the future needs options.
There's so much opportunity! To something like Watson or Siri, add
- Life-like cells are made of metal, Katharine Sanderson, 14 September 2011
- A New Twist on Artificial Muscles, Prachi Patel, October 14, 2011
Computing has so far to go, even when we only take into account what we already know! Just think -- the "not your father's PC" of the future will not just sit there on your desktop, mute and inert! Between here and there, there's a whole lot of innovation to conceive, realize and popularize!
Or going the other direction (from more human-like computers to more computer-like humans), imagine adding this to something like this! It doesn't have to be like this. Makes Kinect look passé doesn't it?!
I know that's a far cry from the cut-throat business HP's computing business is in today. But competition is only cut-throat if you can't imagine -- and execute on -- being compelling!
Ryan told me he came across this (I'm paraphrasing) on the i-way yesterday: "With Lion, Apple reveals that it has given up on dragging us into the future, and has left without us."
But hey, it would be so much fun to make this movie today -- you know, a movie set in 2036, or thereabouts. The backstory is here: The Making of Knowledge Navigator, Hugh Dubberly, Mar 30, 2007. And here's a downer (not). :-) In that light though: Paul Allen: The Singularity Isn't Near. ;-) I mean, consider this: Robot Biologist Solves Complex Problem from Scratch, October 14, 2011 (via BillShackleton). 10/20/11: Uh, but then there's this: Accelerating change or innovation stagnation?, Richard Jones (also via BillShackleton).
Being visionary isn't all about going into the future and yanking the rest of the world into it. But it is important to have a sense of identity that is tied up with a sense of a destiny worth making happen. More tangibly and in the nearer term, what we know is that in the "Internet of Things" computing value network know-how is going to factor big. And there's invaluable know-how that you get from doing it, and then doing it better!
11/13/11: Challenging Microsoft to push harder on the vision thing: A Brief Rant on the Future of Interaction Design, Bret Victor, Nov 2011
11/13/11: Internet of Things Platforms (relationship platforms focused on things). (via Dave Gray and others)
"Any time you do something big, that’s disruptive — Kindle, AWS — there will be critics. And there will be at least two kinds of critics. There will be well-meaning critics who genuinely misunderstand what you are doing or genuinely have a different opinion. And there will be the self-interested critics that have a vested interest in not liking what you are doing and they will have reason to misunderstand. And you have to be willing to ignore both types of critics. You listen to them, because you want to see, always testing, is it possible they are right?"
-- Jeff Bezos, Amazon Shareholder Meeting, June 7, 2011
Read the whole transcript -- lots of lessons!
Follow Up Required
Note to self: need to follow up/verify this.
10/15/11: That is, I wanted to look up the PopSci reference -- here it is:
- What Is the Memory Capacity of the Human Brain? Paul Reber, April 19, 2010
and also remind myself to dig out Ralph Merkle's take -- here:
- How Many Bytes in Human Memory?, Ralph C. Merkle, October 1988.
Twitter Occupied Too?
Well, Ryan is finding this a good time to be singing Bob Dylan songs from the '60s. The youth of today care. That matters!
16 October 2011
Stuart Boardman's Monet Revisited post makes points that overlap synergistically with some I've made; besides he writes with eloquence and style (of the kind I think is iconically captured in the xkcd hat guy) and a unique slant lighting new insights! :-) So, yeah, sure, I droll on about the value of sketchy (double entendre intended) and hand rendered in a very technology-glitzy and implicitly mechanically prescriptive world. And, of course, Stuart's wonderful post goes further too, to survey other innovations in terms of how we conceive and practice EA (and the very enterprises themselves).
Because I think what Stuart was saying in Monet Revisited is complementary to, and enhances, enriches, expands, deepens, provides a different angle on, some of the points I've made, I'll quote myself to add to the conversation -- by which I mean, I'm not trying to say "I told you so" but rather, I'm trying to say this is an "and" world, where we brighten each other's conceptions by holding conversations (albeit asynchronously and using our blogs as avatars that speak for us). In that spirit then, this from my Trace on July 5, 2011:
"Though just an anatomical study, it foreshadowed the sculptor's later efforts to reveal essence rather than merely copy outward appearance." -- wikipedia
"There are idiots who define my work as abstract; yet what they call abstract is what is most realistic. What is real is not the appearance, but the idea, the essence of things." -- Constantin Brâncuşi, wikipedia
I thought that was interesting. Software architecture is like that, isn't it? About finding and expressing the essential structure and nature of the thing. So we struggle with how to talk about it -- abstraction, compression, ...? To me, in advance (of the built system) we could say it is abstract (as in the mechanism of neo-modern art) though by reference or allusion (metaphor/analogy/symbol/imagery, ..., patterns) we draw in potentially huge influence and experience so compression plays a very important role. Once built (at all, and as the system evolves), compression (the very real, actualized meaning of whole chunks of the system gets compressed into the elements and mechanisms we represent) and abstraction (selectively eliding and occluding detail to reveal the essential) figure. In any event (expressing design intent or reflecting design as built), abstractions are central. I suppose, if I must, I could dance on the head of a pin and say these abstractions (entities in essential form) are compressions (they draw into compact form much meaning).
The neat thing is that whole fields of representation (including aesthetics and semiotics) swoop into relevance. :-)
Brâncuşi was, I gather, objecting to bundling his work into a class of art for which there is no evident relationship between what is real and the art; rather, he protested, his work captures the essence or the idea of the thing. It cleaves away the inessential and compresses the essential into a powerful expression of the essence of the reality. In other words, rather than convince me that his work is not abstract, Brâncuşi defined abstract for me, at least in so far as it applies to his work. For Brâncuşi was very much after identifying and capturing the essential identity and form (with its implications for function) of the thing he was sculpting. (See, for example, the evolution of his Bird in Space.) In discovering software abstractions, we can go from concrete instances to a generalization, eliminating detail from the concrete to find the more general, more abstract common form. But seeking the idea, the essence of the thing, supports more degrees of freedom in realizing concrete forms that retain the essence.
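That move -- from concrete instances to a generalization that keeps only the essence -- is easy to sketch in code. A toy illustration (the classes and names here are mine, invented for the example, not drawn from any system discussed in this Trace):

```python
from abc import ABC, abstractmethod

# Concrete instances: each a particular, fully detailed realization.
class Sparrow:
    def fly(self) -> str:
        return "flap-flap"

class Swift:
    def fly(self) -> str:
        return "glide"

# The abstraction keeps only the essence -- "a thing that flies" --
# eliding concrete detail, and leaving future concrete forms free
# to realize that essence each in its own way (Bird in Space, sort of).
class Bird(ABC):
    @abstractmethod
    def fly(self) -> str: ...

class Albatross(Bird):
    def fly(self) -> str:
        return "soar"

def send_aloft(birds) -> list:
    # Works against the essence, not any particular concrete form.
    return [b.fly() for b in birds]
```

The point of the sketch: the abstract base fixes only the essential idea, so the degrees of freedom for realizing new concrete forms stay open -- which is the architectural payoff of seeking the idea rather than copying outward appearance.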
I was pitching that at software architecture. But, about a dozen years ago, when our IT architect clients were urging us to help them figure out what to do in the Enterprise Architecture space, they made the point that the way we think about architecture of systems speaks, with a context shift of course, to other kinds of systems -- those that are more software-intensive and others that are software-complemented-wetware intensive (the wetware, in my use, being a reference back to that Merkle piece).
I alluded to Brâncuşi, and Stuart to Monet, to make the point that detail, inherent in realism, obfuscates. My points are shaded differently than Stuart's, so, as I said, they are complementary, and add up to a richer view.
What pops out freshly for me, from revisiting my thinking extended and freshly nuanced by Stuart's points, is that there is huge value in that "room for interpretation" that more abstract representation, rather than rigorously detailed specification, leaves open. This also harks back to one of our earliest principles -- that of minimalism (see here and here). We want to fix in detail only what we need to, to focus on and enable strategic outcomes. For the most part, we want to simply set sufficient context so that we enable good, right things to happen. Where "good" (in our good, right, successful sense) is technically sound, and "right" is from a stakeholder point of view -- meets stakeholder needs, goals, alleviates frustrations, delights them, makes their lives more the way they want them to be, creates and fills aspirations, provides meaning, and so forth. Which also relates to the Fractal and Emergent which probably bears a closer reading now... ??? ;-)
Talking through some related ideas, Dana said "That's why Jesus taught in parables." In other words, stories play a similar role.
Key points I take from the interweaving conversations then:
- there is something powerful in creating aligning context, without going into tightly prescriptive detail, shaping an enabling space in which the team is free to be creative, to be innovative and draw on their expression and interpretation yet be aligned and in harmony and concert.
- there is something powerful in focusing on the essence of the expression, rather than all the myriad details of reality -- "reality" in our experience of it, is more vivid and better understood as a result of this "amplification through simplification" (quoting Grady Booch quoting Scott McCloud).
And some other stuff. (I thought of a 3rd point while offline... that "memory almost full" phenomenon or the impact of just 4 hours' sleep last night???)
Well, I could pull out other pieces (of my long and arduous conversation with myself that is this Trace) that relate to this vein of conversation, but onward! New fields to plough!
Aside: Remember that the Less is More paper was written in 2002. I would write it differently now (so don't be too critical of dated wording), but it still is ahead of many people's thinking about architecture...
10/28/11: Can you be a little less specific?, Tim Harford, 10/28/11
Reflections on Abstractions
Briefly, just to pull these into my notes:
Above and below -- studies in networks of influence. Both abstract. (Dana took the photo below in Canada a few weeks ago.)
Yep, Dana's been traveling here, there and everywhere -- just in the past two months he's been in Africa, Canada, variously in the US, The Netherlands, etc.
But... Dana needs to go back to South Africa -- we're about out of rusks.
10/17/11: Daniel, most valued and awesome scout that he is, pointed to this piece on Gertrude Stein which well fits this thread. You know, Gertrude Stein of The World is Round, and other things. ... I suppose I have either irreparably sunk or redeemed myself in your estimation by drawing on children's literature in the context of computing and enterprise architecture. Sunk...? Oops. Well, at least... I'm .... courageous. ;-) But The World Is Round has a poignant message in this world where the frenzy of social connection is ... um... deep and important post could be written... but... busy!
10/1/11: But, Philip Hellyer is quoting Lemony Snicket, so I'm in good company. :-)
16 October 2011
Our Visual Architecting Process poster (below) has always been hand-drawn, reflecting our values around communicating organically and in high human-simpatico terms. Dana never turns the projector on in most workshops, and I'll go days without turning it on. A few people hate it, being concerned we aren't "covering the material," but for the most part that responsive-to-the-moment style works well -- for people who will give up the need to have some kind of paper control and trust the instructor to set context and create that learning crucible so that the architectural thinking lessons are drawn out and practiced.
17 October 2011
Depicting some dimensions of the role:
Lead -- raising the ceiling for others
Strategize -- thinking about how to solve the puzzle
Architect -- drawing the big picture view
Politics -- relationships (involves, like, taking showers and communicating and stuff)
Politics -- shielding the team from organizational crud
Just a taste of some of the flavors of the role. Not exactly/entirely representative. But enough to cure most developers of architect-envy. ;-)
Uh. Actually, that's just a page from some sketchnotes I took back when. I have no idea what I was thinking at the time, so I came up with some words that seem to fit. Oh, yeah? Like you never do that?
;-)
17 October 2011
Ryan was wondering who we have now, to fill the gap John Cleese left. I relayed some of Michael Feathers' tweets of late. I was reminded of that, when I saw this:
It's really so much like this, isn't it?
I think Michael would take that as a compliment of the highest order. Certainly it is intended as such.
When I quoted "It's raining diligently?" the chortle was delicious. So (sur)real. It's that amplify thing taken to the kind of peak where exquisite and excruciating meet. Um. Um. No. I didn't mean that. Or... maybe I did.
"It just shows you people have no idea what they are doing." -- John Cleese
18 October 2011
This brain-of-brains thing we've created with the internet has the potential to shift the state of humanity -- in good ways, sure. But also in bad. There are concerns that we and our interpersonal lives are becoming more shallow. We flit through zingy morsels of brain treats served up on the i-way, rather than reading books. We text and tweet rather than conversing and writing letters. We watch youtube on our computers and don't support live performing arts, pressing them to evolve away from their classical roots to be more electrifying. Our tastes shift to satisfying an addiction to the buzz of lots of little eurekas.
I think there's an alternative way to see what is happening, at least in part. Which is to say, for example, we communicate more frequently and across more channels, so if you look at any one channel, it will appear more "thin" but it's just a slice -- diced, moreover, into smaller chunks. But is the communication shallower or more rich, taken across all the parts and allowing, too, for emergence? Are we reading more shallowly, or are we following our interests more passionately, going deep but also leveraging the facility the i-way has given us to tie together sources that bring the everyman into the state of being a polymath or Renaissance man? Are we interacting less, or more with people with like interests, distributed globally, who encourage and fuel our thinking?
I think the potential is there. But we have to make better use of it.
I so liked Jeremy Denk's "Love is Complicated" post -- and Janet's wise comment (August 17), along with bratschegirl's (August 26), in response.
You might want to take a note of bratschegirl's observation (quoted below), bearing it in mind when next you're discussing your architecture with the management team:
"In the orchestra world, we have a similar phenomenon when it comes to contract negotiations. It’s the inevitable discussion of dress code. Janice Galassi had a wonderful take on this some years ago, and what she said basically boils down as follows: Board presidents often don’t understand the ins and outs of multiple doubles or why the principal horn needs an assistant on a program containing the complete orchestral works of Richard Strauss. But clothing? That they get, and therefore they bring it up. Once it’s all over, of course, it’s rather amusing to recount such things as the objection to velvet onstage because it’s “too black,” but it’s rather tooth-gratingly hard to get through with a straight face at the time." -- bratschegirl, comment on Jeremy Denk's "Love is Complicated" post
19 October 2011
"You can see a lot just by looking." -- Yogi Berra
When I heard that, I thought about software visualization and how it enables us to see a lot (anomalies, patterns, relationships, etc.) just by looking. But, seeking the context in which Yogi Berra was supposed to have said that, I found this: She Was Seeing At Me, Mark Liberman, Feb 9, 2008
The cartoon connects nicely with the Howl's Moving Castle reference (which I eliminated from this facade). The snippet from A Scandal in Bohemia is also marvelously architecturally significant!
Sometimes we look without seeing, and see without observing. And sometimes we're not even looking, until someone changes how we view something, and then we see. Like code we know like the back of our hand. But really, how well do you know the back of your hand? Its structure and texture, anomalies and emergent... omw! ;-)
- Visually punny
- DDT (huh? development driven testing -- the new big thing! It's even -Driven and has a 3 letter acronym...)
One of the Mythbusters declared "It was more messy than I thought it would be." The rejoinder? "That'll be the title of your biography." And an apt subtitle for my Trace!
I was pointed to Greg Wilson and Jorge Aranda's article in the American Scientist titled "Empirical Software Engineering." And that article pointed me to this wonderful paper on sketching in software development, and the use of diagrams to understand, to design and to communicate:
- Mauro Cherubini, Gina Venolia, Rob DeLine, and Andrew J. Ko: “Let’s Go to the Whiteboard: How and Why Software Developers Use Drawings”. CHI 2007.
Now I do think that reflecting on the results of one study done internally within one company and treating UML as a closed matter is... unfortunate. That the company was Microsoft is at least interesting, given rivalries and NIH... If the company studied had been an active user of UML, the motivation for and use of sketching may have been about the same, but the notation used less ad hoc. An interesting study would be to compare the retention and evolution of design diagrams in companies where UML is used versus those where it is not. And to compare measures of structural integrity over time.
One might further ask if the "awkward result" we should attend to is the rampant problem of the erosion of code quality over time, and the failure to instill a design discipline that values designs enough to evolve designs in lock-step with code -- maintaining designs and their context and rationale as first class citizens of the software body of work, valued for their contribution to more simple, more sustainable code bases. Useful design views mind you. Which is to say, as far as computer-rendered visualization goes, we need to be able to zoom in and out on levels of abstraction and drill into mechanisms with more levels of control over what is depicted -- and animated.
It is unfortunate that we, as a field, got so enamored with the tool-as-shiny-object thing, and then so disenthralled with the overload of class-level models that we swung the pendulum the other way. Pragmatism and balance don't lie at the extremes of either-or. Clearly we need to attend more, and more responsively, to software designs. Clearly visualization in design is powerful, and a language that enables us to leverage the visual through various phases of system design maturity and evolution is useful. We use UML in a lightweight way for simple sketches, and it would be useful, I think, if software designers were widely taught to match model rigor -- not to mention levels of abstraction -- to the demands of the occasion they face.
Image source: Watson Architecture presented by Grady Booch at IBM Innovate 2011.
20 October 2011
We talk about the design of our systems being primarily held in the tribal memory of our teams, but some tribes have a more cohesive shared view and others have more fragmented views where the "tribal memory" is a diffuse aggregation of many individual mental models of the system and its demands. We can expect structural and conceptual integrity to suffer in the latter case.
Now, in close cohesive teams, a shared view is attained through organic communication relying on conversations with informal sketches -- using these means to brainstorm designs and design changes, to get buy-in and onboard team members. In teams of teams, more attention -- with more formal ceremony, if you like, complementing informal communication -- is needed to scale essential communication to fit the needs of more technical and organizational complexity.
The conversations are important. The sketches drawn organically to support these conversations are important. But it is also crucial to recognize that there is something absolutely vital to the judicious conversion of these conversations into less transient forms.
The text and the visuals in our documents (in a variety of forms, not just "the" architecture specification) provide ground -- we can cognitively push against them, question and test them, better understand them, transmit them, search them. We can engage with the design as design, we can drop into the fine detail of the code where that suits our need, or work with the design in abstract terms where detail merely obfuscates and clouds the issues we're grappling with. We're enabled thereby to create better designs, to better share the design intent, and better evolve the designs. In all of this, we want to maintain the "personality" -- the point of view -- of those organic conversations. That is, sterile context- and argument- (in the sense of explanation and justification) free forms entail a huge cognitive burden, asking the person trying to understand them to second guess what makes them make sense, and misses the point, really, because "decisions" we don't understand will be interpreted in a hit-and-miss sort of way.
Sure, modeling language tools led us, generally speaking, somewhat astray in this regard, focusing on the diagram and what we can express in notation, but ignoring or de-emphasizing the rationale and the thinking that went into other options that were considered but not selected. As tools go, there's the cognitive load of object and class-level detail that is not so far removed from the detail of the code, and at any rate vast -- even a moderate 1300 class system is hard to grok at the level of class boxes and lines, for example. But the conversational context setting, assumption clarifying, elements are also largely missing from model-focused drawing tools. The "architecture as decisions" counter movement (counter, that is, to the architecture-as-structure -- and expressed in diagrams -- base) is in danger of being one-sided in the other direction. Designs encompass decisions about how to make the system more the way we want it to be. And we express in designs the shape of the system -- if we're disciplined, describing the shaping forces we took into account. We delineate the parts, the arrangement thereof and their interconnections. And in the case of dynamic systems, designs show how the thing works or is intended to work (depending on whether we are dealing with design as intent or design as reflection of what is, in preparation for design evolution or leverage), focusing on key mechanisms -- selectively focusing, that is, to accommodate our bounded rationality and limited powers of system grokking when presented with unfathomable detail (like stories-high piles of code).
It is not all about diagrams or models. That's sterile and misses too much of the motivating and explanatory grist and connective tissue of narrative. And it is not all about decisions documented on (textual) templates, for that is sterile in a different way, missing the key context which is design! And it is not about creating a document and expecting it to be the be-all and end-all of design communication. Or end-all it will be!
This is all about enhancing what we do, gaining more cognitive and social traction, and gaining more cognitive and social control over the integrity of the product of our labor. So we need to be sensitive to what helps us achieve those outcomes, and what thwarts them. And it is going to differ, depending on the technical and social challenges of the project in question.
Anyway, it is not about all visual. Or all textual. All document. Or all conversation. It is about the blend that is textured and adapted to the team (of teams) and their context and unique qualities and challenges. "Just enough" is a powerful concept, but requires a degree of savvy in interpreting what it means!
* the personality link obviously makes some directly relevant points, but largely needs translation to design beyond the skin.
10/21/11: Just as TED2012 is looking to leverage technology-extended communicative options, we need to embrace more textured ways to interact with, explore, communicate and enliven our architectures. Tom Graves' "architecture as story" post today makes a synergistic point, with an EA focus.
10/22/11: As we push more complexity into the technology stack, through abstractions we leverage via higher level programming languages, frameworks and libraries, the technology stack choices gain ever more architectural significance. In effect, we're pushing more of the design into abstraction layers that we leverage, impacting our designs second-hand through the platform (stack) our designs are built upon. If we hold user-facing functionality constant, but add a more demanding environment -- for example, if we open the system to outside influence (integration into systems of systems, allowing distance "health" monitoring and updates, etc.) or simply scale the system -- we add complexity. How we have designed the system to accomplish its functional demands has significant, but not sole, import in terms of our ability to meet these environmental demands. But we will also need mechanisms to deal with these "non-functional" demands relating to system properties like scalability and resilience, some we'll design and implement, but often we'll leverage "middleware." So, our platform substrate or "technology stack" choices matter significantly too, and hence are "architecturally significant" and documented in architecture decision descriptions (so decision templates and tool support for decision capture, use, and update are important). I'm just saying, in this post, that it's not one or the other -- not structure (and dynamics) or decisions, but structure that encompasses decisions along with choices we make about what to include in the technology stack we leverage and build in terms of, as well as guidance for downstream (more locally focused, more narrow impact) design work (in the medium of code, too). And the narrative (oral and textual, visually sketched and more formally modeled) too.
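Since decision templates come up here: just to make the idea tangible, a minimal captured decision might look something like the sketch below. The fields, and the "message queue" example, are purely illustrative choices of mine -- not a standard template, and not a prescription:

```
Decision: use a message queue for inter-service events   [illustrative]
Status: accepted (2011-10-22)
Context: opening the system to remote "health" monitoring adds load
  and availability demands that point-to-point calls don't meet.
Options considered: direct calls; shared database; message queue.
Rationale: the queue decouples producers from consumers and buffers
  load spikes; the team has operational familiarity with it.
Consequences: new middleware enters the technology stack (hence
  architecturally significant); its monitoring and upgrade path
  must now be owned.
```

Note that the Context and Consequences fields are doing exactly the work argued for above: carrying the shaping forces and the rationale along with the decision, rather than leaving a sterile, context-free record.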
11/1/11: Architecture decisions:
- The Goal Structuring Notation (GSN), Rob Weaver, April 4, 2008
- The Role of Architectural Decisions in Model-Driven SOA Construction (.pdf), Olaf Zimmermann, Jana Koehler, Frank Leymann
- Olaf Zimmermann's SATURN 2010 presentation slides and his PhD Dissertation. Other papers and presentations by Olaf.
11/28/11: Grady Booch on IBM Watson Architecture:
Drawings from an old sketchbooklet (well, not that old, because Archman only came into being some 3 years ago.)
Alternative views of the architect role... Just to balance out all those "ivory tower" and "architecture astronaut" views... (in case my lines don't convey... arch-superman is getting time back... knight of the holy grail shielding the system ... against bugs and bugbots... and robin arch-hood... is redistributing attention so sustainability debts don't jeopardize all... all are wearing the system properties kiviat as their insignia...)
I also drew the Loch-Mess architect:
Where is the architect? Precisely! ;-)
"A lot of people in our industry haven’t had very diverse experiences. They don’t have enough dots to connect, and they end up with very linear solutions, without a broad perspective on the problem. The broader one’s understanding of the human experience, the better designs we will have.” ~ Steve Jobs, Wired, February, 1996
I thought of that quote in relation to Doug Newdick's use of a line from a Gang of Four song (in the post) and allusion to Peirce and Wittgenstein (in the comments). We expand our experience base by doing. Yeah. Obviously. But also by reading. We gain expansive life experience by pouring ourselves into life, and also into the imaginatively created worlds of great novels. We perceive things differently by exploring the mental models and thinking of some of the greats that have graced our world stage, advantaging ourselves of the investigating, questing, sense-making of great explorers of ideas and insight, makers of history, machines, and frameworks for thinking, conceiving, understanding, connecting.
Sure, our physical experience and in-the-flesh emotional encounters are crucial but I think it is misleading to count only our direct experience -- though, apart from its direct merit, it does also give us the base on which to evaluate and "reality check" these other experiences. Still, it is the whole mass of lived through and lived in that develops in us unique material to draw on.
Each advance in products and knowledge spaces raises expectations of further sophistication, more connections, with greater variance all yielding more complexity. The key, perhaps, when it comes to innovation and addressing complex problems is to have enough of the "requisite variety" internalized, in turn demanding of us both greater diversity of experience and greater empathy. Empathy is what allows us to move imaginatively into another's experience, to function effectively at an emotional level. And through it, we accumulate diverse experience without having to be in all those places to gain all those experiences. And diverse experiences enable us to develop principles and see patterns, accumulating know-how and -what and -when, but also further developing empathy and imagination. Both giving us the ability to shift perspective.
So we develop our distinctive point of view, our aesthetic and sense of what we stand for and principles that guide us on the path our values impassion us to take. But by exercising empathy and imagination and actively pursuing curiosity, we become more flexible. Cognitively. And socially.
The diverse experience I have in mind is both technical and non. The technical, from an architecting standpoint, is crucial, but the singularly linear trajectory of acquiring deep expertise is what Jobs was pushing against. You've no doubt come across the term "T-shaped," but by diverse experience I don't just mean an axis of depth and an axis of breadth, but a multi-dimensional curiosity that dives deep in many areas, not always with a clear preformed notion of the "win."
Flexibility is an asset in so many ways. Flexibility in how we deal with situations -- allowing us to push against but also to lean into, to balance, and to pivot. Flexibility -- a wider span of options for interpreting and dealing with problems, converting them into opportunities. Flexibility doesn't mean being a kowtowing push-over. Flexibility requires enormous strength, but it is not a rigid sort of strength. It is the sort we need to be creative. Innovative. Agile. Responsive. Adaptive design is possible iff we have good sensors to detect points of relevance and shifts in context even in ambiguous shadowy emergent form; we have the imagination to acutely sense the meaning and import of the shifts; a broad compass of options to connect, meld and adapt to the emerging demands we collaborate in creating; and a mutable design base. That is, if we and our designs are flexible. Including flexible enough to dance well with others, to hold and be held, to balance, to give and to take focal attention, to complement. To extend what we can learn and do through analogy, and know when to move on. ;-)
Flexible enough, even to recognize when we are wrong, and do an about-face:
5. Changing your mind is a sign of intelligence. When Apple launched the iPhone, Jobs thought apps were a bad thing; you never knew what they were doing to your phone. Six months later, Jobs realized apps were the way to go – and suddenly his mantra became, “There’s an app for that.” -- Guy Kawasaki, What I Learned from Steve Jobs, October 8, 2011 (via KrisMeukens)
10/23/11: Picasso, on doing:
This post was inspired by a conceptual framework Dana is exploring around how we orient ourselves to situations. Flexibility was one of the sliders, if you will, or axes, if you won't. :-) Anyway, I was excited about the idea, and did my own riff on it 'cos it's fun to think with my fingers in on the act.
21 October 2011
From the Stream
- emtech MIT, October 2011
- John Seddon speaking at Lean & Kanban 2011
- 10 reflections on storytelling (via Tom Graves)
- Motivate8's visual "big picture" map (via Tom Graves)
- Japanese Brainstorming Technique, Michael Michalko, October 2011
- The Great Tech War Of 2012, Farhad Manjoo, October 19, 2011 (via KrisMeukens)
- Want To Upend An Entire Industry? Change Its Revenue Stream, Ryan Baum
- Don't Sell Yourself Short (pricing models), June 12, 2011
- Your Wall Has Ears, Geoff Nairn, October 19, 2011.
- Blackberry's Perfect Storm, Peter Cripps, October 21, 2011
- Does IT mean Information Technology? Or is it just a department?, Stuart Boardman, October 2011
- Now hiring: companies move away from outsourcing to control their IT destiny, Sean Gallagher, October 19, 2011
- Participatory Sensing, Jeffrey Goldman et al., May 2009
- The Psyche on Automatic: Amy Cuddy probes snap judgments, warm feelings, and how to become an “alpha dog.” Craig Lambert, November-December 2010
- Mastering Python 3.0: I/O, David Beazley
- Software Architect 2011 (and earlier) slides and photos, London October 2011
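Since Beazley's "Mastering Python 3 I/O" talk is in the stream above: its central theme is Python 3's strict separation of text and binary I/O. A minimal sketch of that split, using only the standard io module (the strings here are just illustrative):

```python
import io

# Text streams traffic in str: you write and read decoded characters.
text_stream = io.StringIO()
text_stream.write("héllo\n")
assert isinstance(text_stream.getvalue(), str)

# Binary streams traffic in bytes: encoding is explicit, never implicit.
byte_stream = io.BytesIO()
byte_stream.write("héllo\n".encode("utf-8"))
assert isinstance(byte_stream.getvalue(), bytes)

# Bridging the two is deliberate: TextIOWrapper layers a decoder over bytes.
wrapped = io.TextIOWrapper(io.BytesIO("héllo\n".encode("utf-8")), encoding="utf-8")
assert wrapped.read() == "héllo\n"
```

The design point is that Python 3 refuses to silently mix the two layers -- writing str to a binary stream (or bytes to a text stream) raises a TypeError rather than guessing an encoding.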
Even though you think I keep track of too many things that flit by, I so often regret not jotting down something that caught my eye... For example, I remember a snippet along the lines of "the moment we get what we want, we're on to wanting the next thing" and I can't remember where I read that... It's that Give a Mouse a Cookie thing I've mentioned before, but it occurred to me to wonder what's next on the iPhone? I mean, now we have Siri, what do we want next?
22 October 2011
"The problem is that leadership isn't about being the person with the answers, it's about being the person with the questions. You have to shift your mindset from answering questions to asking them, even when faced with questions." -- Dan Pritchett, I'm Still Here, September 24, 2011
Having the experience and smarts to ask useful questions factors.
Other great posts by Dan include:
- Mark Down Isn't a Discount, Dan Pritchett, October 22, 2011
- Conway's Law, Dan Pritchett, November 28, 2010 ... And a follow-up on the Rearden Deem blog: Aligning Corporate Culture with Software Culture, Dan Pritchett, Oct 25, 2011 (Wow, Rearden sure has a uniformly white male top management team...)
Of course, I've long been a fan of Dan's blog -- so glad he's writing again.
23 October 2011
Look Mom, I can Predict the Future!
Steve Jobs -- the biography by Walter Isaacson -- goes on sale tomorrow. So, it's easy to guess what you'll be reading on your iPad Kindle reader tomorrow! :-) You know, it's critical technology intelligence work that architects just have to do. Hardship though that is. ;-)
(Image source: book cover, Amazon)
Looking Back, and Seeing Forward
Tim O'Reilly's survey of his impressive contributions to the technology field provides a valuable view. Valuable for him, for looking back, one gains perspective on what one values. And valuable for us -- we tend to see people in terms of just the slice that intersects with the span of our attention, and it is heartening to see all that Tim has helped initiate and shaped with his influence and foresight.
As world changers go, Matt Flannery and Jessica Jackley, co-founders of Kiva, are way up there on my list of most notable, so I'm happy to see they received The Economist's "No Boundaries" Innovation Award!
24 October 2011
Well, now that I have your attention, let's rather talk about ballet. ;-) Clay Shirky says that "good collaboration is structured fighting." Steven Pressfield applies his US marine background to creativity with The War of Art. Oliver Burkeman calls him on this: "Does it really make sense to view creative work as a battle?" Ah, is this not structured fighting? Well, that's a macho way to position the way we approach the tensions in creativity and collaborative creation. Whatever floats your boat I guess... Though if you're reading here, you're at least open to the notion that collaboration can be more of a dance than a fight!
Which is what the "structured fighting" -- or the dance -- is much about, is it not? Pushing against. Finding and fixing the weakness in our conceptions and approaches because they are challenged, and advancing the state of our art and practice.
Outsourcing Jobs -- To Silicon
I've long been talking about the bigger jobs threat being "outsourcing to silicon and steel" but I see there is now a book for that: Race Against the Machine.
What did I say? So nice of you to inquire! Why, last year I wrote:
We, dear software/technology people, are obsoleting humans as fast as we possibly can, and so far the despair and vitriol from lost jobs is being focused on jobs going overseas and immigrants coming here. But perhaps one of these days we are going to wake up to a firestorm of wrath when people realize where so many jobs really went (absorbed into IT systems and taken up by I-do-what-I'm-told and no-worries-about-benefits-and-unions robots).
Ohhh, cut me some slack. It's just fun to have a reason to pull out some old pieces and give them an airing. ;-)
Gold in the Stream
This article about Steve Jobs' prototypical hero -- Edwin Land -- is wonderful: The Man Who Inspired Jobs, Christopher Bonanos, October 7, 2011
Jobs saw, and Jobs understood: “Not only was he one of the great inventors of our time but, more important, he saw the intersection of art and science and business and built an organization to reflect that.”
Prototypical hero? You know, the prototype on which Steve Jobs styled himself.
Speaking of gold -- I went for a glorious spirit lifting hike with my thinking partner today. It was way too lovely to work all day, and soon the leaves will be down and the long leafless winter will begin.
Dana took a break from his coding project, and had fun with the Photosynth app on his iPhone, though he hasn't uploaded anything yet. Oh, and we were very impressed with the new iPhone camera interface. The touch interface on the camera is so intuitive and powerful -- touching what you want to focus on, and using gestures to zoom -- that has to redefine cameras right there!
Aside: I can't believe that HP just threw in the towel on tablets. There's so much to be redefined, and Jobs wasn't the only person able to put will behind imagination! Besides, I just wish iTunes was as great as Apple's devices! Apple is awesome. But Apple doesn't have a total lock on awesome. What Apple had and hopefully has, what Google has, is the imagination to see what is possible and the will to make it so. Transplant CEOs have no founder's halo and... sadly... too seldom have the ability to see greatness for the company they lead.
As for books, I've been reading David Bohm's On Creativity. It was a surprise, and not. It is very much about how Bohm thought about systems.
More Sad News
Another sad loss!
Life runs so quickly through us!
I suppose this Trace is worthwhile to me, as I learn so much in reflection. But is it worth your time? Well, that you stop by here, is worth so much to me, for it bolsters and encourages me -- which is vital to the other work I do. Thank you.
25 October 2011
Orienting to the Fuzzy Front-End
I enjoyed Tom Graves' "On innovation, foundations, scaffolding and Portakins" post (from May 2010). :-) Great points, artistically made!
And, it gives me an excuse to pull together some of my past posts on the theme, to add to the "generative soup" from which more rich conceptions and understandings arise. :-)
This month's post on "connecting impressions" and the conceptual architecture rewrite I've been working on are related, in so far as they deal with sketch prototyping. Other posts on pretendotyping and prototyping include:
I've made different, though complementary, use of the scaffolding metaphor:
And points about treading carefully around the delicate idea:
There's not much that is truly new under the sun, but as ideas swirl about us, we make new connections, forming new idea and product blends, with new -- hopefully more helpful -- presentations to the world. Sometimes the idea just needs a few more cycles of the earth round the sun, to find its time to take root -- allowing for some changes in technology and our receptivity.
Software faulted: Jaguar recalls 18,000 cars over cruise control software fault, Leo King, Computerworld UK, 24 October 11
"Although we’d always seen ourselves as rational creatures—this was our Promethean gift—it turns out that human reason is rather feeble, easily overwhelmed by ancient instincts and lazy biases. The mind is a deeply flawed machine." -- Jonah Lehrer, Is Self-Knowledge Overrated?, October 25, 2011
Well, we're counting our blessings and oh are we blessed with color! In this moment. Tomorrow, more rain. The leaves are coming down fast -- rain all last week, and breezy these past two days. So today we ducked out for a lunchtime ramble.
If art is about something else, don't you think there's a visual poetry in trees -- that maple yellow, giving back the sun's glow. It is at once about itself, and a life well lived. Again and again through its seasons, and its seeds.
Dana Bredemeyer, Steve Jobs and Grady Booch were all born within months of each other. It reminds our kids to appreciate Dana, knowing that Steve Jobs' kids don't have him physically present to them any more. I do so feel for Jobs and Pausch and other fathers, and mothers, who have to leave their kids. Cancer is cruel.
So, more world travel in the forecast. We're happy to be busy. And busy, being busy! Each of the four of us does way too much! But. Life is short. There's so little time to make a difference, to have our presence felt. To matter.
Booch on Twitter!
Perhaps I should explain my response:
"I think, therefore I am" neglects something important -- we are known, cared for, connected with, and in that community we are. So being heard, to me, is the important complement to "I tweet, therefore I am."
Aside from being a founding father of visual design in software (and systems), Grady Booch's On Architecture series is setting a high bar for our field with ~8 minute chunks of architecture insight, conveyed with great artistry and wisdom. Oh, I am quite unabashedly the self-designated chief cheerleader in Grady's fan club. Of course I'm also the self-designated chief cheerleader in your fan club.
Well, go ahead. Follow the man. You make me look like I have no influence if you don't! And we don't want that, now do we? Oh. Right. I have no influence. Sigh.
10/25/11: Ok. I was wrong. In this case, happily! I do so much like Peter's quick sense of humor! And his kind words about my sketches led to more kind words, which took me absolutely by surprise! My sketches?! "Great" and "refreshing perspective" applied to my sketches is so totally a first and MADE MY DAY! Ok, my sketches are undeserving, but it is wonderful that anyone would be so generously kind about them.
Funny. I was just being "good neighborly" to Grady, and I got good neighbored. :-)
26 October 2011
The End Depends upon ... the Beginning
Grady Booch's tweet reminds me of this slide and the notes from my "The Art of Drawing People In" tutorial at SATURN 2010:
The attribution that is cropped is to Sir Ken Robinson's TED Talk "Schools Kill Creativity." Sir Ken. And [language warning] Descartes.
I really did that whole thing -- the tree falls, software dinner party... the lot. :-)
Okayyyy. Perhaps you had to be there... Instead of introducing myself in the usual pedigree-establishing way, I then went on a quick survey of what I think about, that defines me. Like this:
Etc. I do things which on the surface seem dismissably simple, but of course you know that is my art. ;-)
Let me back up. These were the first 4 slides:
Yes, Daniel, I played the beginning of The Emperor's Club -- thank you.
The point? Beginnings of social constructions are not about "I" but about "we" -- about drawing people in, connecting them into the social co-creation process. We're taught by life and business that we need to establish credibility and sell, leaning on the ethos and logos legs of Aristotle's triad of rhetoric, and we so often neglect pathos and meaning. Moreover, it is so much more powerful to involve people. Sure, we can't create by committee, but we can "draw people in." Literally (drawing them in our pictures), concerning ourselves with the meaning of what we are creating for people. But also figuratively. Involving them in establishing that compelling meaning. Yes, we need a strong model of what it means to lead and follow, with followers willing to delegate ultimate judgments about design integrity to the leader and the leader willing to empower and enable followers to be superb, to create something amazing.
I should put that on Slideshare so you don't have to wear your finger out scrolling down? Sorry. ;-)
We tend to wave off being better at beginnings, thinking we're not often at the beginning of anything. But really, there are so many beginnings. So many fractal pools of leadership. And so many false starts, needing a new lease by gaining traction in meaning.
But don't listen to me! I'm bad at beginnings, and middles, and .... Still, you might want to consider how bad I'd be if I didn't rely on others to make the experience ever so much better! :-) We're in this together. The moment of dying, of passing out of this life, is alone. We may feel lonely along the way. But we are great only if others take us into their minds and hearts and lives. And so being "I" depends very much on others. On you!
Blah blah blah. Pictures. Dan Roam said so. You don't have to believe me. Although, I believed David Sibbet, back when (in the 90's, when my HP peers dragged me into visual facilitation and group graphics). And he has a new book -- Visual Teams.
And a Jolt Goes To...
"The most prevalent software architecture style is still "The Big Ball of Mud." Restructure101 for .Net from Headway Software is a superb weapon in our continuing battle against architecture entropy. Restructure101 parses an application and outputs a graphical dependency map of the components, classes and methods, in a format called Levelized Structure Maps (LSM). The LSMs provide a layered, drill-down/up structure that immediately reveals cyclic dependencies which Restructure101 presents as tangles."
-- Jolt Awards for Design, Architecture, and Planning Tools by the Jolt Judges, October 26, 2011
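The "tangles" in that quote are cyclic dependencies among modules. Restructure101's own parsing and LSM construction aren't described here, but as a purely hypothetical illustration (all module names invented), a depth-first search over a dependency graph is enough to surface one such tangle:

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {node: WHITE for node in deps}
    stack = []                             # current DFS path

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in deps.get(node, ()):
            if color.get(dep, WHITE) == GRAY:       # back edge: a tangle
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = visit(dep)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(deps):
        if color[node] == WHITE:
            found = visit(node)
            if found:
                return found
    return None

# A "layered" design with one tangle: ui -> core -> db -> ui
modules = {"ui": ["core"], "core": ["db"], "db": ["ui"], "util": []}
print(find_cycle(modules))  # ['ui', 'core', 'db', 'ui']
```

Tools like Restructure101 go well beyond this sketch -- levelizing the whole map, grouping strongly connected components, suggesting moves to untangle them -- but the back edge found here is the atomic fact they visualize.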
Congratulations!
27 October 2011
Good questions are more or, well, at least as important as good answers! And this is a great one:
What is not going to change?
It is one of those "ground under our feet" things (using a phrase Dana introduced into my lingo). Dana insists we put assertions on context maps. :-)
As we face climate change, it is an even better question! At least, the answer set may be quite small... Ever since our neighbor, an environmental physicist, told us that his predictive models have the mid-west becoming a desert, I've looked at our trees differently. It's sort of like being told your loved one has cancer and you see them differently. I do so love our forests! And our world has cancer!
So, change has a timeframe element. Our planning horizons tend to be short enough not to see the catastrophic effects of all the changes that combine and mass together, self-organizing into something new that is deadly to the old order, but the portal to a new order. The question is, will humans be part of the new bio-physical order that establishes itself in the wake of the destruction we unleash?
Hm. Cancer. Steve Jobs wished he'd sought treatment earlier. We're going to wish the same for the planet, if we don't become extraordinarily concerted in our determination to reduce our environmental footprint...
There is a yin and yang here, and our brains don't do very well seeing yin when they're looking for yang, and vice versa. But when we look for opportunity, what persists, what is stable, is important. And it is very much about values. What we value, that we want to "fight" -- or at least make sacrifices for, work hard for -- to retain. Not because we're resistant to change, but because whether we embrace it or not, over time, much will change.
I love good questions! Good questions open up scope for exploration. Good answers are important at some point, but a good answer to a misplaced question is a huge resource sink. It is, after all, what so many failed projects are made of. I, likely as not, got something entirely different out of Kris's good question than Kris had in mind. And yes! What is not going to change gives us points of leverage -- what do we already have in place, what do we already know how to do, what can we count on to be true, to be valued, to be meaningful? That ground under our feet.
Good questions serve to knock us out of our bias ruts, so we explore some -- just enough -- down different tangents. Because after the fact, we're going to value what we did (even if we only get to call it a learning experience) and self-justify our choice as the better path. (How many people, do you think, fall prey to Robert Frost's self-(d)ef(f)acing satire there? And then, the joke's on us.)
"The client hires you, so the client is the priority. But you can’t just build a building based on what the clients say, because their vision is based on what’s normal. How do you get out of the normal? You’ve got to question everything. Spend time with the user group. Glean all the information you can. And then throw it all away and begin to play."
-- Frank Gehry, Life's Work: An Interview with Frank Gehry, Katherine Bell, HBR Magazine, November 2011
The web is so powerful it even fast-forwards us into the future -- I love reading future publications today, don't you? (Thanks for the Gehry pointer Daniel.)
We value questions. But questioning can be deflating. I drew the sketch some time ago, thinking about two kinds of questioning -- the friendly kind, and the unfriendly kind.
"But researchers reporting in Plos One say that unequal upper and lower beak lengths and spongy, plate-like bone structure protect the birds' brains.
More quantitative studies are necessary to answer this interesting problem, which would aid in applying the bio-mechanism to human protective device design and even to some industry design."
-- Jason Palmer, How woodpeckers avoid head injury, BBC News, October 27, 2011
Conceptual architecture isn't just about defining "the language" of our system. Oh yes, it is that. Concepts are understood within frames, and conceptual architecture creates a framework of defined organizing concepts and determines their relationships to one another. In that sense, it establishes the meaning not just of primary elements of the system, but also of the system. But we're inventing and evolving the notions of a dynamic system. Which is to say we're not just conceiving the structures of the system, but how it will function, and how its major elements will function, and function together, not just to deliver the system capabilities it offers to users, but also the mechanisms by which it sustains and protects itself. You know. How it's not going to get brain damage when it bashes its head against walls. Ok. Not that exactly. Um...
In Conceptual Architecture we're just sketching out the design of key mechanisms. Early on we do this to give us that "just enough" sense of what to go after, to understand more thoroughly (yes, we get to code here). Later, we can reflect design precision in Logical Architecture, but we use Conceptual Architecture as a "grok guide" to the system and don't mire it down in detail -- we're leveraging that "amplification through simplification" brain lever.
"But Bach had that way of using passing tones so that you could meditate on the passing-ness of things, what it is to pass, to move on, to leave beauties behind … of labeling the labels with meaning, breathing life back into the most basic, even the most unassuming, words." -- Jeremy Denk
10/28/11: You're wondering about my sketch? How kind you are! Well, the parafoil (for soft landings) also doubles as a defense shield. Archman is monitoring what is going on (the binocs) and putting goodies (pickles perhaps) into a handy canister... up there... you know... in the cloud. What, you didn't see all that? Hmpf! I suppose it would help if you could read my scrawl. Oh well.
More mechanisms (using biomechanisms and with biomimicry potential):
- Secret Codes in Bacteria, Bruce Schneier, October 27, 2011
- Why do we sleep? Paul King's answer on quora (via Maria Popova)
In software please? My shelf is buckling under patterns books; isn't yours? It's not the solved problems we need answers for, it's the unsolved or poorly solved problems. We keep pushing the envelope of complexity, and, well, Nature, for example, has been there, done that... often more elegantly than we conceive without Her...
10/28/11: I overcame a severe case of denial and got reading glasses (just 1.0 but wow, I can read expiry dates again ;-) so hopefully my sketch 'n scrawl will get ever so much better! :-)
Positive Emotions: How to Get Them and What They're Good For
Images below are from Barbara Fredrickson's keynote titled Why Care about Positive Emotions?.
To be open to and create:
(I like Martin Howitt's current Twitter image. It's so open to possibilities. :-)
Is somewhere else...
So, I see that #longread has become a meme. Applicable to this Trace.
Does it say something remarkably good or devastatingly bad that you're reading here? You still read -- longreads? I'm so glad some from the longread era survive... for now. But, ... it's ... good exercise for the scroll finger... even if you use the calendar to jump to today... ;-)
A network of minds. And I have you asking if that's a good thing? Well, at least I served one useful purpose!
In related news, this just out: Reinventing Discovery: The New Era of Networked Science, Michael Nielsen, 2011
28 October 2011
And on that note, this from flowingdata:
Take a look at the comments. I think this caption fits:
And the chaser:
Oh, you, I'm a straight no chaser kind of...
I just, you know, copy down stuff Serendipity says... I think this is what Storify was made for, but I don't feel like resurrecting my password... Besides, no-one will find this at the bottom of zifty pages of October entries. ;-)
Well, TMI has also become a meme (more than just a pop flash, but encoding and diffusing a too-busy-to-care mindset), but the most colorful use was when a kid was talking about the gunk between her toes after pointe class and another told her that was "too much information." I suppose she was channeling her mother. Whaaaat? TMI? See, it's useful around me! ;-)
#longread... TMI... there's fast-massing cultural antibodies to everything I represent.
Ok. For a more gentle (don't be fooled; disarming is more powerful than punching) take on assumptions, may I recommend Getting Past ‘But’: Finding Opportunity and Making It Happen? #longread
To a man who influenced the course of computing history: John McCarthy
29 October 2011
We're pretty taken with emergent, although my goodness, think of the rattletraps we put good men into orbit about this earth in! In the light of today's technology, it seems quaint that we put brave men in rockets and blasted them into the sky. Human intentionality achieves some gosh-darn amazing things! As years create distance and technology advances, they get even more amazing -- for their remarkable daring and audacity -- in hindsight. So, as we give "systems thinking" more sway, acknowledging emergence and paying more heed to our puny handle on complexity, it is just as well to also tip our hats to human audacity, courage, and, even, be rather grateful to what we have bumbled our way to accomplishing! :-) And then get on with being appalled and ashamed and motivated to get to grips with both more human applications of technology and ethics in practicable terms, because that genie is busy!
Jim Coplien writes that he was urged and inspired by Richard Gabriel (my brain always pulls Peter--who's reimagining himself to awesome effect--in conjunction with Gabriel!) to write each email as a piece of literature. I've never had such a rule. (I'm not a rulesy person. Although, man, I'm so with Steve Jobs on that simplify life all the way down to not having to think about what to wear.) But I do try to "show up" with only words to present me. So it was that I subject lined an email "intentional and emergent" and began the email "well, not exactly a choice as emergence has a mind of its own." TMI? No. I mean consider what I just did there! I think art is a medium through which we sense meaning before we science it. Emergence. A mind of its own.
"All models are wrong."
"This sentence is false."
If we can't embrace paradox and mate analogies we're sunk!
I knew my tendency to mixed metaphor would serve me well... some day...
I enjoyed and got a lot from Jurgen Appelo's Complexity Thinking slideset. He has a bunch of non-self-reflective peacocks falling over themselves to declare they're magpies. It's a good thing I hadn't seen that or I would have had to head September differently! And I'd have had to change this post in April and this in May. Now remind me never to call my mind a magpie again. Of course you're thinking I'd do well to be less self-reflective, less of a magpie, and less of a bragging peacock.
As for genies, remember this:
Well, scary only if we don't figure out how to cope with the genie we've unleashed into the world. When I was enmeshed in a bout of self-doubt at one point, I expostulated to myself "You know what? I think the Greek myth got it wrong. I don't think it was Hope at the bottom of Pandora's Box. It was DENIAL." With that thought, Hope returned. [Hey, it's not everyone who has the chutzpa to go up against Greek myths. That has to be worth something. ;-) ]
I know. Sometimes my writing can just be too artsy for anyone. Me included!
Sigh. One day someone will declare that art goes with architecture and it's actually ok that someone writes about architecture artistically, sensitively, sensibly, presciently and intelligently.
That someone will probably be me. Writing about someone else.
Sigh^3. Sigh cubed? You know, a sigh that's wide and deep and high.
We're all exhibitionist peacocks. Or we could not be leaders. For all our introversion, when we feel compelled to lead, in that arena anyway, we will stand out. Passion displays itself. Even Mother Theresa stood out. And we're all magpies or we would have little to connect. It's all complex because there are no closed systems. But we gain traction by acting as though we can isolate and regard systems as closed, and we proceed as though we can impose our intentionality upon the interactions among their parts under the assumption of only circumscribed interactions with the environment. And we get lucky often enough to put a man on the moon and to have Siri serve us dinner suggestions. And we mess up. Big time. And our messes mount and mount. And then some new order emerges. Because emergence has a mind of its own.
And so we cycle.
Shall I do that hope denial thing again? Oh, you don't like the idea of denial masquerading as hope at the bottom of Pandora's Box?
- Minecraft awarded GameCity videogame arts prize, Leo Kelion, 29 October 2011
My kids love Minecraft. Ryan was torn between being Steve Jobs or a creeper for Halloween. Sara either wants to go as a robot or a porcelain doll. She's zeroing in on the doll. Why? She can't think of anything that will scare her friends more! :-) (Oh, no, it's nothing like Nightmare on Elm Street -- not that I would know. It's just that to girls who want to be quantum physicists, porcelain dolls are scary cute...)
It's... no wonder Plato's Cave has renewed interest...
30 October 2011
I killed yesterday's post. But with this tweet
I had to resurrect it. "Art is a delivery system for worldviews" is a great insight! But the point I was making, is that a great artist pulls together what is "in the air" in new ways, so we begin to make sense of it, through the fresh connections and striking presentation of the artist. Ryan (who grabbed our copy of Steve Jobs to be the first in the family to read it) tells us that Steve Jobs said (or quoted) "The great steal. Others merely copy." Great artists make ideas their own, by doing something new with them. They see what is ahead, because they see that "today is pregnant with tomorrow" and they are among the first to birth what will become. Which is to say, advancing what is possible within a worldview. Sometimes. And being part of what replaces a worldview, making new connections between pieces from which a new worldview is shaped. Or simply offering a tremulous trembling insight about our frail human condition -- a condition we're still far from fully understanding, even if the likes of Dickens knew how to at once tenderly embrace and, in holding up a mirror to ourselves, chastise it.
The great artist's mastery of their medium begs our attention, but if we didn't feel that we could conspire with the artist to find meaning in their work, our attention would simply remain at the level of appreciation for the craft. Because the artist, often simply because they are a conduit for Serendipity (but a conduit Serendipity favors because they have rich thought lives), has created something that unfolds meaning in teasing glimpses and revelations. It invites us to join the artist in an adventure, rewarding our brains more subtly than an action video game.
'In September of 1968, in what he jokingly termed “E. Gorey’s Great Simple Theory About Art,” Gorey wrote these Yodaesque words: "This is the theory… that anything that is art… is presumably about some certain thing, but is really always about something else, and it’s no good having one without the other, because if you just have the something it is boring and if you just have the something else it’s irritating.”' -- Maria Popova in BrainPickings
"The skill of writing is to create a context in which other people can think." -- Edwin Schlossberg (quoted in "Books That Have Shaped How I Think" by Tim O'Reilly)
Creating a context, yes. Rich with ground from which to build meaning, to advance conception, and hence what we can make possible in the world.
'Cezanne believed that light was only the beginning of seeing. “The eye is not enough,” he declared. “One needs to think as well.” Cezanne’s epiphany was that our impressions require interpretation; to look is to create what you see.' -- abstraction becomes the new realism, Jonah Lehrer
Advancing context, you might say. So delivery in the birthing sense. Not just in the conveyance sense.
These are interesting:
- "Information Is Cheap, Meaning Is Expensive", George Dyson, 17.10.2011
- Lesson of Cezanne & The Upcoming Disruption, Nilofer Merchant, October 7, 2011
As is this:
Clarke's Laws (wikipedia)
- Clarke's Second Law: The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
- "Hazards of Prophecy: The Failure of Imagination" in Profiles of the Future (1962)
- Clarke's Third Law: Any sufficiently advanced technology is indistinguishable from magic.
- Profiles of the Future (revised edition, 1973)
- Clarke's Law of Revolutionary Ideas: Every revolutionary idea — in science, politics, art, or whatever — seems to evoke three stages of reaction. They may be summed up by the phrases:
(1) "It's completely impossible — don't waste my time";
(2) "It's possible, but it's not worth doing";
(3) "I said it was a good idea all along."
- All truth passes through three stages.
First it is ridiculed.
Second it is violently opposed.
And third it is accepted as self-evident.
- As quoted in Seeds of Peace : A Catalogue of Quotations (1986) by Jeanne Larson, Madge Micheels-Cyrus, p. 244
'"After two decades in animation," he said, "I was spontaneity-starved." ... The film had scope and humor and gusto, and you could feel a bounding imagination at work.' -- Second Act Twist, Tad Friend, October 17, 2011
"I don’t think we should lose sight of the fact that architecture is more than just building, but we have to rethink the relationship of architecture to building in such a way that the building is not considered a defective mode of a discourse." -- Dialogue: David Adjaye, Jorge Otero-Pailos, & Nikolaus Hirsch, On Architecture and Authorship: A Conversation, 10.24.11
That conversation is really exciting and inspiring, its points of controversy and dialog relating well to software and enterprise (and other systems) architecture.
And this is a good reminder to architects and other leaders and parents alike:
- you're always teaching (via Danah Boyd)
Dennis Ritchie and Other Giants
Tim O'Reilly stepped up to the leadership plate for our field and declared today Dennis Ritchie Day. Gratitude is a grace that blesses the grateful with mind opening, and for Unix and C, we have a lot to be grateful to Dennis Ritchie for -- directly, and indirectly through everything they influenced and enabled. There were giants indeed! And many of the giants -- that would include you -- of today, build on what Dennis Ritchie built and made possible.
It occurs to me that an unsung giant is Laurene Jobs -- a giant who is dearly loved, adored and personally admired is all the more able. And Steve Jobs was loved by many, but with the greatest forbearance and enablement from those closest to him.
Too many images this month. It's time for November! We'll just skip Halloween, ok?
No? Well, then...
I guess this all-on-one page business is too much... but... it's a journal -- a Trace of a journey, and journeys happen with adventures threaded and interwoven, to be sure, but still across time.
But... those who are put off by load time would be put off by the words -- the very muchness of them and the... how-should-we-characterize-it content of it... So... should I worry?
Besides... it's kind of like falling with Alice... down down down... then... finding you're too big for the door... then... etc... then...
'Would you tell me, please, which way I ought to go from here?'
'That depends a good deal on where you want to get to,' said the Cat.
'I don't much care where--' said Alice.
'Then it doesn't matter which way you go,' said the Cat.
'--so long as I get SOMEWHERE,' Alice added as an explanation.
'Oh, you're sure to do that,' said the Cat, 'if you only walk long enough.'
'But I don't want to go among mad people,' Alice remarked.
'Oh, you can't help that,' said the Cat: 'we're all mad here. I'm mad. You're mad.'
'How do you know I'm mad?' said Alice.
'You must be,' said the Cat, 'or you wouldn't have come here.'
Alice didn't think that proved it at all; however, she went on 'And how do you know that you're mad?'
'To begin with,' said the Cat, 'a dog's not mad. You grant that?'
'I suppose so,' said Alice.
'Well, then,' the Cat went on, 'you see, a dog growls when it's angry, and wags its tail when it's pleased. Now I growl when I'm pleased, and wag my tail when I'm angry. Therefore I'm mad.'
-- Lewis Carroll, Alice's Adventures in Wonderland
If I don't warrant an exception to rules, why... I shouldn't be part of your flexibility regimen. ;-) How often have you seen other people quote just the "'I don't much care where--' said Alice. 'Then it doesn't matter which way you go,' said the Cat" bit? Ha!
But if you like images that tell a story that makes you hunger, even if they are slow to load... try this. Seriously wow. And we've been there! I've seen a number of those scenes! Makes me want to go back! It so feels just like that! Scenery that rebirths spirits!
Isn't that what great art should most do -- make us hunger to be more ourselves? Taking us beyond ourselves. Inspiring us. Challenging us. Inviting us to care, to understand, to empathize, to marvel, to yield, to be vulnerable, to wrestle with our demons and doubts, to listen to our angels, to discern, to advance our limits, to be great. By doing, by creating something meaningful (and relationships count as the most meaningful), by living our own very unique beautiful life experienced as only the singular "I" of our own self can, we create our best work. It has its manifestations in the world, but the fullest expression of it is our own mind-spirit. And so great art that compels us to discover, to build, to learn, to connect, to discover truth, is art that makes us hunger to be more. Hence more ourselves. That's not what it is about, not what it tries to do. But that is the effect.
31 October 2011
Going to school this morning, Ryan was telling stories from Steve Jobs -- dressed as Steve Jobs, for his school Halloween parade. Taken with the stories, Sara decided she'll read that when she has to read a biography later in the school year. Ryan mentioned the "Think Different" Apple commercial (which he and I watched many moons ago), and I remarked that Steve Jobs is going to make being different the new cool. Sara responded to the effect that while everyone is the same in all their effort to be different, she would just stay the same, and hence she'd be different.
But think about it. I've been combining "art and technology" here for nearly 6 years, and now it has the chance to be "cool." To ride the wave of "different"... ;-) I know my sketches are schlocky... It's a too-full -- not Spartan -- grab-bag of brain treats... of the really-hard-to-chew kind...
Ok. Whatever else it is, I'll say this: Those who I know read here with some frequency are architects I much admire. I mean, aside from the strength of character it takes to read here. :-) They are interesting, deep-thinking and effective people -- and optimistic too. :-) And they lead and empower me, and I am so grateful! And because of them I am optimistic too. Ok, wrong holiday. Thanksgiving is next month. Halloween. Oh yeah. Grateful praise... you don't get much more scary than that. ;-) So, two holidays done -- I get to skip ahead to Christmas, right?
Hey, at some point, reading here might become, like, a badge that signifies "resilient -- tenaciously dedicated to a tough mind flexibility regimen; ...; curious -- incorrigibly so, ..." You know, putting my Trace on your blogroll would be like putting an Apple sticker on your car... A "think different" marker that doubles as an intellectual "iron man" award...
Trick or Treat?
- The LMAX Architecture, Martin Fowler, 12 July 2011
- Released: ReliabilityPatterns – a circuit breaker implementation for .NET, Tatham Oddie
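The circuit breaker idea behind Oddie's library is simple enough to sketch: stop hammering a failing dependency, fail fast while it is "open," and probe again after a cooldown. Here is a minimal illustration in Python -- the class name, thresholds, and behavior here are my own illustrative assumptions, not the API of ReliabilityPatterns or any other library:

```python
import time


class CircuitOpenError(Exception):
    """Raised when a call is rejected because the circuit is open."""


class CircuitBreaker:
    """Toy circuit breaker: after max_failures consecutive failures the
    circuit opens and calls fail fast; once reset_timeout seconds have
    elapsed, one trial call is allowed through (half-open), and a
    success closes the circuit again."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise CircuitOpenError("circuit open; failing fast")
            # Cooldown elapsed: half-open, let one trial call through.
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failures = 0
            self.opened_at = None  # success closes the circuit
            return result
```

The point of the pattern (as Nygard framed it and Fowler popularized it) is that the fast local failure protects both the caller (no piled-up timeouts) and the struggling service (no retry stampede).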
Not sure how to classify this: