Friday, 25 November 2011

It's not Email vs Social Media, it's tools for jobs


This post http://t.co/pTgTLA79 got me thinking and talking to a couple of customers this morning. In it:
  • Mark Zuckerberg talks about replacing email with something more immediate and clearly, for him, Facebook's new messaging service is it. 
  • The Microsoft guy then pours cold water on it - understandable given the amount of revenue Microsoft get from licensing Outlook, Exchange and the servers that power corporate mail all over the world. 
  • The (ex IBM) guy who invented MIME points to (IBM's) LotusLive, which is lovely and largely irrelevant to most corporates for whom Notes is a tainted if not poisonous chalice of end user computing proliferation.


I totally buy a few things in this article. I think the move towards conversations in a shared, protected environment is going to continue as users see the benefits of being passive participants in conversations; the 'ambient information transfer' argument. Users will also become more comfortable in such an environment, meaning they'll contribute more.

The argument "email vs social media" strikes me as a bit bizarre. It's like saying, "Is a letter better than going to the pub with your mates? Or having a meeting?".

Well, it just depends what you want to achieve.

For a start, the idea that email is somehow going away doesn't fly - but there are plenty of things done with email today which it isn't that great at. Products will come along that replace things users currently do with email.

This is already happening; before Twitter, how else would you share an interesting link? Blog it? Email out your blog?

And it will happen in other areas.

For example, email gives users control of their workflow; often users will create filing structures within email to allow them to rapidly locate information. I did it myself until around 2008, at which point search and filtering got good enough for me to find what I wanted without filing it. Normally, if I can remember the context of an email then it can be retrieved, even if it means searching on a name, then spinning through a couple of pages. Better than being a filing clerk.
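That search-over-filing workflow amounts to something very simple; a sketch with hypothetical messages (no real mail API is assumed here):

```python
# Minimal sketch: retrieving mail by remembered context rather than folders.
# The messages and fields are hypothetical illustrations.
messages = [
    {"sender": "dave", "subject": "Q3 budget review", "body": "numbers attached"},
    {"sender": "sue", "subject": "Team lunch", "body": "Friday at noon"},
    {"sender": "dave", "subject": "Re: site outage", "body": "root cause found"},
]

def search(messages, term):
    """Return messages matching a remembered name or keyword."""
    term = term.lower()
    return [m for m in messages
            if term in m["sender"].lower()
            or term in m["subject"].lower()
            or term in m["body"].lower()]

# Remember it came from Dave? Search on the name, then scan the short result list.
hits = search(messages, "dave")
```

No taxonomy to maintain; the cost moves from filing time to a few seconds of scanning at retrieval time.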

Another example: email is a singularly terrible way to talk about anything to any volume of people. If you're not on the list, you don't know it's happening. If it's in your inbox and nowhere else, it's impossible to share the knowledge. Replies get crossed all the time ("with respect to Dave's point in his third para"). 

A decent collaborative enterprise platform kills both of these uses of email. But it will have to ensure that:
  • Collaboration has to be based around work; it has to be in a shared data & communication environment rather than simply a messaging environment - it has to be real-time and capable of dealing with multiple conversation threads
  • Communities of users are able to freely form and create new knowledge from existing data, so the user community becomes the filing system
  • Enterprise search is powerful enough to render filing a waste of time


Email remains pretty good for private communications - personal or enterprise confidential items you don't want to broadcast. (Of course, it's sensible to assume that everything gets leaked eventually.)

But in all other areas, it just needs the right product to topple it.

Friday, 11 November 2011

Lessons from teaching myself to run again


I've been running on and off for nearly 30 years - but more off than on. My whole family have been runners at some point, undoubtedly due to Dad, who's still a dedicated runner now as he gets towards 70. In my teens I was quite into it and could do an 11 mile run over pretty serious terrain at the drop of a hat.

So I've always been working off a base of assumed competence - like a lot of organisations. No-one ever taught me to run - certainly not Dad, who could 'just do it'. As so often seen in other disciplines, if it comes easy to you, you've probably not gone through a learning process to become expert in it, and so you're not best placed to teach excellence in it.

But my running has been more off than on. Allowing for a mis-spent late adolescence and early adulthood, it's been off and on for two reasons; lack of pace and recurrent injuries.

Over the last few months we've finally nailed the right asthma medication for me. Of course, I thought my asthma was fine; I was running every other day, usually comfortably. But the nurse said, 'all very well, but you're running to your limits', i.e., without the right meds there was no knowing what my potential was.

The other breakthrough was finally teaching myself to be a forefoot striker; I now land and push off on the balls of my feet rather than being a heavy heel striker. Heel striking is all very well, but you have to roll forward onto your forefoot to push off, which increases the risk of pronation/supination and also seems to massively increase the shock to your system with every step. Every time you land on your heel it's like a car crash; the force travels straight up your leg.

So forefoot striking has really reduced the injuries. I've been running for a couple of months now without injury.

The final key is not to overdo it. Instead of thinking, 'yeah, 11 miles used to be no problem…', the right way for me has been to limit it to half an hour every other day of walk/run, building up to complete sessions of running. The aim is to get the muscles and mental capability developed gently. 

I think these lessons have relevance for organisations too.
  1. What medication does the organisation need? Is there a tool or process whose absence is limiting the capacity of a process?
  2. Can we make simple changes to reduce friction and risk at the point where work is actually done?
  3. Can we make lots of small steps to increase capacity, rather than a single, high-risk change?

Saturday, 5 November 2011

How we made the 'minimum viable product approach' work



Seth Godin describes why the minimum viable product approach doesn't always work; you can't go through the try/fail loop without the support of a community of users.

This is borne out by our experience at Sabisu. In our case, what we had to do was build the community before the product; find something that meets a community's needs, pitch the idea and perhaps a prototype and construct a community to support you.

That community is essential for a few reasons:
  1. It's validation - plenty of people will tell you you're crazy, so it's nice to have some people around of a different opinion.
  2. It's a feedback network - so you can be sure 
    1. You start by addressing a real problem rather than something vague and perceived.
    2. You continue to address real problems instead of going off at a tangent.
  3. If it's a great idea then users may support you in other ways (expertise, for example) in return for early adopter benefit.
  4. You get case study opportunities very early, as opposed to launching then waiting a year.

Friday, 28 October 2011

Why call the blog 'One Less Cut in a Thousand' ?

Is there anything more dully self important than writing a post on why you called your blog a certain thing?

However, I was asked, so here's the answer.

You may have heard the expression 'death by a thousand cuts'; it derives from a barbaric method of execution used until 1905 in China but generally the phrase is used to describe 'creeping normalcy', or negative change which happens slowly in unnoticed increments.

My career so far has been spent on the inside of enterprises of all sizes. In my experience the most inspiring visions, innovative ideas and game-changing technologies fail to be adopted not because of a considered architectural or strategic decision to reject them but because of 'creeping normalcy'; the organisation fails to improve because there is inertia and apathy. Great ideas simply fail to find fertile ground - or, worse, we drift, apathetic, into creating terrible, unethical organisations.

Likewise, improvement projects fail not because of a considered decision to close them down - in fact, this is a successful scenario - but because lots of small negative actions chip away at the business case, or reduce uptake, or invalidate assumptions.

There are thousands of negative actions that reduce an organisation's capacity to change, to improve. When I started the blog, my aim was simple; every post should have something positive a reader can take away that helps them to change their organisation; something positive to counteract the thousands of possible negatives.

Every post should be one less cut in a thousand. Perhaps one day we'll never make the first cut at all.

Monday, 24 October 2011

Real world enterprise social networks; why quality trumps quantity


I think the uptake of social media within the enterprise is really interesting. There are few independent studies (and lots of sponsored ones), so it was instructive to see the adoption of an experimental, completely unsupported and unpublicised enterprise Yammer implementation – it would be inappropriate to say when, where or how.

What happened was that following an initial burst of activity, where the recruitment of new users went viral, it went silent for about 9 months. There were very few messages, very few new users. Then the activity picked up again of its own accord; users put up more messages and the join rate started to increase rapidly.

I must admit that having read about the Twitter new user ‘9 month bounce’ last year I was expecting it – basically, users go quiet after joining and then come back. This resonated because it’s exactly what I did with Twitter.

Of course, Twitter’s new user activity is obscured by the continuous near vertical (but linear, apparently) user number growth. In a limited environment any slow down in growth is much easier to see.

So why the dip? Why the pickup?

I think the dip was down to a few things;
  1. Public nature of posting – people were a bit shy/wary.
  2. Users unsure what to post, or why.
  3. Insufficient followers – you post more when you have followers. Probably nine months is roughly how long it takes a new user to gain enough followers to make posting worthwhile.
  4. Growth by spam; it was pointed out that, on joining, the platform spammed your contacts with invitations. These users might join but weren't necessarily engaged.


Why the pickup?

Just my thoughts but:
  1. It took 9 months to find the right users; a couple of users popped up out of nowhere and began posting on a regular basis, making the community active and therefore:
    1. Useful
    2. Accepted behaviour (cf. Gladwell's 'The Tipping Point', particularly the sections about 'permission'.)
  2. Personal social media penetration; users are relaxing about posting stuff. Without doubt enterprise users will get burned by posting inappropriate comments – and learn from it.
  3. Viable network size; one of the reasons that Yammer wants to drive network growth (see 4 above) is that they’re aware that a big network is more likely to be robust and active, hence useful, hence endorsed by the enterprise.

Now, fascinatingly, the relationship between the size of a network in the early stages of development and its activity was almost (could be?) mathematical: they correlated precisely. It was almost as if every single new post added a user, even though that user wasn't addressed.

In fact, the number of messages was far below what we might expect; perhaps the quality of the network and network activity didn’t match the growth; perhaps quality drives quality, as growth drives growth.
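The 'almost mathematical' relationship above is just a correlation check; here's a sketch on made-up numbers (the real figures aren't mine to share):

```python
# Sketch: Pearson correlation between cumulative user count and cumulative
# post count, on hypothetical weekly totals.
from math import sqrt

users = [10, 12, 15, 20, 28, 40, 55, 75]   # hypothetical weekly user totals
posts = [11, 13, 16, 21, 29, 41, 57, 76]   # hypothetical weekly post totals

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(users, posts)  # close to 1.0 when the two series move in lockstep
```

A coefficient very close to 1 is what "almost as if every single new post added a user" looks like numerically.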

The takeaway for me is that building a quality network is more difficult and more rewarding than building a big one.

Friday, 14 October 2011

Where I've been going wrong with IT strategy


I love strategy. I love the idea. But it's only recently that I've come to realise where I've been going wrong all these years.

I've yet to see a company who has a non-IT core business define and execute a successful IT strategy.

By 'define' I mean clearly describe the vision and the goals/objectives.

By 'execute', I mean delivery of the strategic platforms and the solutions that sit on them; effective communication; mass adoption; proven benefit; everything defined delivered.
  
Sure, we see organisations deliver components of a successful strategy but never the whole hog.

Why?

Well, I don't think strategies are sufficiently agile. Certainly small, modern, agile enterprises seem to express themselves in terms that make big, mature, static organisations wince. Which is a bit strange seeing as they're both reliant on the same species to function.

Often, strategies don't actually mean anything. As soon as someone says the word, 'strategy', it seems to be the green light for academic techniques that don't actually resolve in anything a user would recognise. It's like even the communication of the strategy is FUD driven and scared of someone deconstructing the buzzwords.

Because they don't mean anything, they don't engage users. The guys on the ground floor don't care. The guys in the middle are busy being squeezed by the guys at the top and the guys on the ground floor. Vague, long range planning is your enemy. It doesn't translate into the real world.

Often, the people within an organisation don't know what it cares about; what it stands for; what its principles are. Think this is outlandish? Check out John Oliver, Grow Your Own Heroes.

So, strategists:
1. Make your strategy agile.
2. Eliminate buzzwords, be simple.
3. Engage users with tangible objectives.
4. Know what you care about.

As I said at the start, I love strategic thinking and know what a well chosen strategy can do. It's only now I'm at Sabisu that the importance of it - and the way to make it really work - is becoming a little clearer to me. Here at Sabisu we're working up a bit of a guide on how we think strategy should be done and over the next few weeks we'll get round to putting up some ideas.

Friday, 7 October 2011

Why self-service BI is falling short

Having spent some time over the last couple of weeks introducing companies to Sabisu, it's clear that 'self-service' is seen as a big win - though the precise definition of what that means differs.

Every enterprise appears to have the same problems; complex business processes, a wide variety of often proprietary data sources, heavy use of IT expertise in integrating systems so that end users have the data where they want it. These problems result in duplication of data and a dependency on IT that destroys agility. How can you respond to incipient situations when you have to wait on an IT release schedule?

Everyone we've spoken to sees the answer in shifting capability out to the masses; empowering the end users by providing pre-configured reports or cubes where an end user can build reports on demand with recent data. Limited menu, often reheated data, but hot all the same - like fast food. This works to a degree, particularly in a slow moving environment, because you can schedule report generation or cube maintenance and the data will be 'recent enough'.

But it's not quite self-service. There's a long tail of requirements that the pre-built cubes aren't going to satisfy; all the IT department can do is invest more time, money and effort in building ever larger cubes as each new requirement is uncovered. Before you know it the costs associated with BI are spiralling, so the likelihood of tackling non-relational data or proprietary format manufacturing data is slight.

The end-user experience is often not great. User queries get invalidated as cubes are rebuilt (usually for IT reasons). Reports generated by different users don't tie up because different fields from different systems are confused - and implementing a data dictionary is often not viable, even if you can get cross-department agreement on a single version of the 'truth'. End users have to become proficient in what is, in effect, a development environment for building reports.

All this points to partial adoption at best on the grounds that the service just isn't great. It's got to be better.

I'm looking for:

  • End user driven platforms, so no IT involvement - particularly none that could invalidate a trusted, end-user designed report
  • Genuine end-user driven data access without needing to train everyone first - it's got to be built around modern UX principles
  • Real-time, or as near as makes no difference
  • Direct access to source data - if we're going to have a debate about the data, let's at least be clear on what we're looking at
  • Some way to action BI; curate it for a community, make it actionable, collaborate on it
  • Controllable expense - there's no way the enterprise should be penalised with increased expense or complexity for a user wanting to extract or share data

Friday, 30 September 2011

The perils of over-engineering


A couple of weeks ago, as referred to obliquely in our Sabisu blog, we upgraded our development and QA environments. Our hand was forced because we lost the development environment irretrievably due to over-engineering in the earliest days of the company; something that couldn't possibly harm us in the future did.

The lessons learned can be summarised thus;

1. If you do experience an exponential increase in activity, you can probably find the funding for more capable infrastructure.

2. Advanced technology steals time. Use the simplest technology you can find that will do the job.

3. Only use technology you understand and have a track record in configuring successfully. If you're an infrastructure guy/company, fine - if not, stick to what you're good at and outsource the rest.

Anyway, here's the story.

When you set up a company, particularly a boot-strapping start-up, you are short of everything apart from ambition - that's what makes it fun. The two things you're critically short of are money and time but you can't help planning for the 'hockey-stick' eventuality; a graph that suddenly swings from gradual linear to exponential growth.

So we originally built two hardware platforms in-house on high grade consumer kit rather than industry standard hardware. (This is a problem in itself because high grade consumer kit often has the features of professional hardware but it's less expensive - and that cost saving appears as a time cost as, inevitably, the hardware is less reliable.)

Then we created three virtual environments on each machine; mirrored VMs for Development, QA and fileserver/DC. Each virtual environment was snapshotted in its entirety on a schedule. Data was backed up to a spare hard drive in one machine and the source code was backed up off-site to Amazon Web Services.

(Worth saying that the live system has always been hosted to the highest corporate standards by a third party infrastructure specialist.) 

In the worst case scenario, we'd lose a hardware platform but we'd never lose any data.

As a little start-up we didn't have the time or people to define and implement the kind of processes you need to manage this kind of environment. So, one day the VM paused mid-snapshot in something of a dither, having run out of room.

We didn't expect this; we expected each snapshot to be a few GB at most, because we didn't understand snapshots fully or have the processes in place to monitor storage usage. We had no processes defined to allow us to bring back a failed VM. It was catastrophic.

As a backup, the snapshot was useless; it took several hours of crunching to restore the most recent snapshot, only to find it was too old to be useful. The VM was effectively bricked because the data was stale - so the multiple platforms were irrelevant.

The VMs were a waste of time. We would have been better with any alternative. As it happens we reconstructed the source code from the online Amazon Web Services backups and each developer's work in progress version. We rebuilt the hardware platforms as decent standalone Development servers, which we understand.
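The missing safeguard was trivial; a sketch of the kind of pre-snapshot check we should have had (the path and threshold are illustrative - a real check would size the pending snapshot against the VM's disks):

```python
# Sketch: refuse to start a snapshot unless there's comfortably enough free space.
# Path and threshold are hypothetical illustrations.
import shutil

def safe_to_snapshot(path=".", required_gb=50):
    """Return True only if free space at `path` exceeds the snapshot budget."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024 ** 3

if not safe_to_snapshot(required_gb=50):
    print("Snapshot skipped: insufficient free space")  # alert a human instead of bricking the VM
```

A few lines of monitoring like this, run before each scheduled snapshot, would have turned a catastrophic failure into an email.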

Friday, 23 September 2011

How we set up Trac for agile development


Following this post, I had a couple of queries about how we set up Trac for managing Sabisu. Hope this helps.

Where does Trac live?
We implemented it onto our Dev server, so we have control over the environment. It could live anywhere but just seemed to make sense. We expose certain elements of our dev environment to the internet through the Sabisu platform, so that's how we get remote access.

How do you divide your product into Trac?
Initially we divided the platform up at the top level, so we had a separate instance of Trac for each major application; the platform itself, Chronos time logging, Forms and so on. However, this made it difficult to see ticket allocations across the team, so now everything is in a single project.

We then split the platform into Components, e.g., Chat, Chronos, Communities Functions, Communities View etc., through to Widget Editor. Every component is assigned to a different member of the team to make default work allocation easy.

Milestones
We use Milestones a lot. Every milestone corresponds to a release and we allocate each a codename because it's easier to say, "We're moving ticket 192 to the Nestor release" and have everyone know what you mean. We try to get a balance of about 20-30 tickets per release and we release new revisions on a weekly basis.

Priorities
Our priorities, running from highest to lowest: blocker, critical, major, minor, trivial, cosmetic. If part of the system is inaccessible, or we can't complete a test, that's the highest priority. At the other end of the scale, 'cosmetic' indicates something that's genuinely cosmetic - if it affects UX in any way it's major or minor.

Severities
Our severities, running from most to least; Multiple Customer Outage, Customer Outage, Customer Inconvenience, Irritant, Risky to leave, One for later.
For any severity lower than Customer Outage there's generally a workaround available. 'Risky to leave' tends to be architectural or infrastructure work, but there's no reason why it should be limited to that.
Also, a ticket regarded as an 'outage' mightn't be a Blocker; it could be that the functionality is accessible but fails.

Ticket Types
Couple of interesting categories: Defect, Enhancement, Live Incident, PoC.
Of course, Live Incident and Defect are both important categories. However, in conjunction with the Severity and Priority we can properly direct our efforts; a Live Incident could be relatively minor and addressed at a later date without significant impact.
The 'PoC' type is used to denote 'proof of concept' work. This is usually pure R&D work that needs productisation at a later date, usually through a series of Enhancement tickets.

The Priority, Severity and Ticket Types fields work together; the most serious ticket is a Live Incident causing Multiple Customer Outage preventing access to part of the system (a Blocker).
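How the three fields combine into a single triage order can be sketched in code; the ranking function below is my illustration of the ordering described, not anything Trac computes itself:

```python
# Sketch: ordering tickets by type, severity and priority as described above.
# The scoring is illustrative; Trac itself just stores the three fields.
TYPES = ["PoC", "Enhancement", "Defect", "Live Incident"]
SEVERITIES = ["One for later", "Risky to leave", "Irritant",
              "Customer Inconvenience", "Customer Outage",
              "Multiple Customer Outage"]
PRIORITIES = ["cosmetic", "trivial", "minor", "major", "critical", "blocker"]

def triage_key(ticket):
    """Higher tuple sorts first when used with reverse=True."""
    return (TYPES.index(ticket["type"]),
            SEVERITIES.index(ticket["severity"]),
            PRIORITIES.index(ticket["priority"]))

tickets = [
    {"id": 1, "type": "Defect", "severity": "Irritant", "priority": "minor"},
    {"id": 2, "type": "Live Incident",
     "severity": "Multiple Customer Outage", "priority": "blocker"},
]
worst_first = sorted(tickets, key=triage_key, reverse=True)
```

Under this ordering the worst possible ticket is exactly the one described: a Live Incident causing Multiple Customer Outage at Blocker priority.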

Resolutions
Very dull; Fixed, Invalid, Wontfix, Duplicate, Worksforme, Unable to Replicate.
I hate the Worksforme resolution because it's not a resolution of any kind…but I tolerate it because sometimes you just can't reproduce a user-reported defect.

We don't use Versions and don't link Trac to SVN, though it's perfectly feasible - it's just not something we've needed to do.

Comments or thoughts welcome.

Friday, 16 September 2011

Five rules for organising an agile, timeboxed product dev team


Over at the official Sabisu blog we outline some guidelines we use to manage the development of the product. I thought it might be good to expand on why we established them, what they really mean in practice, and what they give us.

1. Work to the next release. It’s always next Monday.

The 'release early' philosophy is well established in agile software development; the sooner you can expose your work to your customers and react to their needs the better. Weekly releases allow us to turn around requests, incorporate customer feedback and incrementally improve the user experience.

Early in the lifecycle we tried to go to weekly releases but such a rapid release cycle caused a dip in quality as we tried to work in complex back end code too fast - now that we have a mature platform and processes, weekly releases are sustainable. We take care not to expose users to complex functionality until it's usable, but the functionality is being gradually constructed behind the scenes as we go.

2. Incidents first. Defects second. Then enhancements. Always.

Many IT teams will hit incidents first; if your customers can't use your product for some reason, that needs to be sorted.

However, we only work on enhancements once we've got through defects; only defects waiting on a third party are put on hold.

This is in stark contrast to a lot of development teams where enhancements and defects are worked simultaneously. The problem with this approach is that (i) no one wants to work defects over enhancements and (ii) regression testing is tough.

Our approach does mean that in some releases there's little new functionality. We think that's a good thing; we concentrate on quality.


3. Every work item & every update goes into Trac.

Our defect/incident/enhancement/release management tracking tool is Trac. It's an open source, simple but full-featured tool that we use day to day. Everything is logged, graded in terms of criticality and severity, assigned to a developer and allocated to a release. We tend to work about 25-30 Trac tickets into each release, with some developers taking only 2 or 3 effort-intensive tickets.

Developer updates, testing notes, screenshots and anything else relevant goes into Trac. If we need a progress report, need to write release notes or are affected by a live incident, Trac will tell us what changes to commitments have to be made. Of course, it's all auditable if we need to trace the route of a change back through the process.

This means that we can forecast when a new feature or defect fix is going to be made available and if it should move, then all the relevant parties are informed.

4. The work plan gives the high level resource allocation.

Basically, the development team moves too fast for project plans to remain current, so we have a high level work plan that simply shows who's allocated to what customer (if they're off production) or release (if they're on production).


Having spent a lot of years as a project/programme manager trying to squeeze huge plans onto a small screen in Primavera/MS Project or whatever, it's difficult for me to say this but... it's not something we do at Sabisu.

Basically there's no point. We work on such short timescales that a detailed project plan isn't much use beyond 3 weeks and all the detail is in Trac anyway. All we'd be doing is shifting data from Trac into a Gantt chart. By the time we've shifted the data the work's done.

So it's easier just to hit Trac for a report of what's done and what's assigned to the next release. As long as we can get the 'big' bits of functionality into the main code trunk in a safe and sensible way we'll be ok.

You might legitimately ask how we plan the implementation of significant amounts of new functionality; there's a planning process implied in order to get the work into the build. The answer is simply that it happens offline, outside Trac, and the work is broken down into simple, achievable pieces of independent functionality prior to entry into Trac. If a function is too big to be completed inside a release window, it gets broken down further into chunks that will fit.

Any bespoke customer work is dealt with the same way; we chunk it, tell the customer when they can expect each chunk and go for it.

(Now it's particularly interesting that back in my day at Motorola, we were expected to give the PMO a four week forecast for task completion, separate to the plan. I wonder if the data from the Primavera implementation didn't quite cut it?)

Beyond the high level work plan we have a long term roadmap which guides us in choosing the right features to bring into the product.

Whilst timesheets are done through our own Sabisu application, they're principally done for invoicing customers, as we do some bespoke work; we can be very accurate about how much time we've spent on each task.

5. Flex enhancements out to meet the release date (timeboxing).

Finally, we flex scope all the time. Making the Monday release with quality code is more important than shoehorning in new functionality. Generally, it means the new feature is delayed a week and we've never encountered functionality that's so time critical that it's worth endangering the quality of the code for.
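The timeboxing rule reduces to something very simple: keep incidents and defects, and flex enhancements out until what's left fits the window. A sketch (ticket data, effort and value figures are invented for illustration):

```python
# Sketch of flexing scope to meet a fixed release date.
# Incidents and defects are never dropped; enhancements flex out, lowest value first.
def fit_release(tickets, capacity_days):
    must_do = [t for t in tickets if t["type"] != "Enhancement"]
    flex = sorted((t for t in tickets if t["type"] == "Enhancement"),
                  key=lambda t: t["value"], reverse=True)
    plan = list(must_do)
    used = sum(t["days"] for t in must_do)
    for t in flex:
        if used + t["days"] <= capacity_days:
            plan.append(t)
            used += t["days"]
    return plan  # anything not in the plan moves to next Monday's release

tickets = [
    {"id": 1, "type": "Defect", "days": 2, "value": 0},
    {"id": 2, "type": "Enhancement", "days": 3, "value": 5},
    {"id": 3, "type": "Enhancement", "days": 4, "value": 2},
]
plan = fit_release(tickets, capacity_days=5)
```

Here the defect and the higher-value enhancement make the release; the other enhancement slips a week, which is exactly the trade described above.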



Friday, 9 September 2011

The limits of metadata in the manufacturing enterprise

Moving on from the previous topic of curation being a better fit for manufacturing needs than 'conventional' BI, we should really look at the other data produced in large volumes within any enterprise: documents.

Documents tend to be produced by a fat client on an end-user OS, both of which imbue them with metadata and place them in a taxonomy.
Often both the metadata and taxonomy are of varying usefulness and accuracy, as taxonomies are corrupted, folders duplicated and metadata rendered invalid by server processes. At least an end user can make a value judgement about the document, and an enterprise search tool has something to index, meaning that from a list of apparently similar documents returned by a search query a user can make an educated guess as to the valuable item.

Once that valuable item has been located, the user might well share the location of the file with a distribution list...

...and that's curation picking up where enterprise search has failed.

When ERP data is considered, you'll find little metadata of value to the end user. Again, it would be a common scenario where an enterprise search returns possibilities and the end-user selects and publicises those of value to the wider community.

Manufacturing systems also generate very little metadata, as they're designed around a single purpose, e.g., to log data in real-time. The metadata is limited to that which is necessary to make sense of the reading - you could argue it's not metadata at all. Clearly, in these instances enterprise search has nothing to offer, but expert end-users do; they can identify the key trends and highlight them to a wider community.

Of course, there's a network effect as more curation takes place; as more data is linked together by expert judgement, the value of the network grows disproportionately with each link created.
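That network effect is just combinatorics; a quick sketch of how the number of potential links grows with the number of curated items:

```python
# Sketch: potential links between curated items grows quadratically with item
# count, so each act of curation can raise the value of everything already linked.
def potential_links(n):
    """Number of distinct pairs among n items: n choose 2."""
    return n * (n - 1) // 2

# Doubling the curated items far more than doubles the possible connections.
small, large = potential_links(10), potential_links(20)  # 45 vs 190
```

Doubling the item count here roughly quadruples the connection space, which is why expert curation compounds in value.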

Just as internet search engines are devalued by systematic metadata corruption (link stuffing, spamming, or any other 'black' SEO practice) so enterprise search is devalued by closed, proprietary or legacy systems producing unlinkable data.

And just as on the internet the value of curated content (usually) outweighs that of content returned by a search algorithm, so it will be in the enterprise, where the editors or curators are experts in the technical aspects of their business.

Friday, 2 September 2011

Why conventional BI fails manufacturing enterprises but curation succeeds

Back here I was describing what the terms democratisation, syndication and curation mean in the Enterprise 2.0 environment.

Of these, curation is particularly important to the process industries and perhaps to manufacturing as a whole. And here's why.

The data generated in a manufacturing environment can be thought of as broadly falling into the following categories; documents, ERP data and manufacturing data.

Whilst it's tempting to exclude documents from any BI discussion, it would be wrong to do so; whether in Lotus Notes, SQL or elsewhere, this is where day-to-day manufacturing decisions, events and instructions are stored. They represent a key data source for understanding trends yet are often ignored by BI solutions.

ERP data is typically proprietary and stored deep in an inaccessible database designed with system and process integrity rather than data reuse in mind, to be accessed only by a vendor specific MIS client.

Manufacturing systems data is generally generated with very little metadata by proprietary systems that are designed around a single purpose, e.g., to log data in real-time.

As any business intelligence vendor will tell you, the value of collecting such data is in the analysis of trends; identifying series of points that demand action. Yet the value of such analysis is exponentially increased by deriving relationships between trends, e.g., an interesting manufacturing trend may become a critical decision point when placed against an ERP trend. Causal relationships are what drive effective decisions - decisions which may require considering ERP data alongside manufacturing data alongside operational documentation.

This is precisely where conventional BI fails in the manufacturing environment; it's usually vendor aligned and incapable of dealing with proprietary data from multiple sources.

It's also difficult for end users to get to grips with, which means the enterprise can't leverage the expertise within the wider user base; conventional BI relies on users to be experts in the construction of queries, when their expertise is the construction of manufacturing processes.

It's end users, expert on the business process but inexpert on BI tools, who will spot these relationships and must be empowered to act.

This is curation; without meaningful metadata to make connections algorithmically, expert human filtering and nomination is the only way a community of users can be notified of a relevant trend. This is the real data that needs user collaboration, selected by a user that appreciates the nuances of the community's shared interests.

These users must have easy access to data from multiple proprietary sources; a level playing field that promotes mash-ups and comparisons. End users must be able to identify their own causal relationships and share their findings immediately with the wider community, driving quick decisions and developing knowledge that is in turn utilised in the future. There can be no reliance on IT to enable this process - it has to be in the hands of the end-users so they can act quickly.

In this way, data can be socialised; business intelligence can become social business intelligence; communities can benefit from shared expertise, expertly applied to their data.

Thursday, 25 August 2011

Syndication, democratisation and curation of data within the enterprise; killer combination


Syndication

If we take syndication to mean making content available to other sites/subscribers (e.g., web syndication) then although enterprises seem to be happy with the idea of syndicating public facing content such as press releases, articles or white papers via social media, the idea of syndicating enterprise data isn't yet mainstream.

So the definition of data syndication would be that data is published simultaneously in a number of other channels - those channels could be outside, or inside the enterprise.

If you're inside the enterprise, it makes it more likely that whatever channel a user prefers, the user gets exposed to the data. Perhaps more importantly, it ensures that whatever the data source is, there's a common way to access it. By syndicating the data into multiple channels, the data can be event-driven to the user, who can then take control of it - it's there on demand as opposed to needing to be found. So the benefits are speed, awareness and engagement.
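A data-syndication layer of this kind is essentially publish/subscribe: the source publishes once, and every subscribed channel receives a copy. A minimal sketch - the channel names and data item are invented for illustration:

```python
from collections import defaultdict

class Syndicator:
    """Publish a data item once; every subscribed channel receives it."""
    def __init__(self):
        self.channels = defaultdict(list)  # channel name -> received items

    def subscribe(self, channel: str):
        self.channels[channel]  # registers the channel with an empty inbox

    def publish(self, item):
        # Event-driven: the same item lands in every channel at once,
        # so users see it in whichever channel they prefer.
        for inbox in self.channels.values():
            inbox.append(item)

s = Syndicator()
for ch in ("portal", "mobile", "partner-extranet"):
    s.subscribe(ch)
s.publish({"tag": "FIC-101", "value": 42.0})
```

Note the source system publishes once and knows nothing about the channels - which is what makes adding an external channel (the supply-chain case below) cheap.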

The case for syndicating data externally is potent. Firstly, it's transparent, so it builds partnership and trust; for instance, the near instant visibility of customer demand down the supply chain can reduce the 'bullwhip effect'. In addition, real-time situational awareness speeds up vendor response. Perhaps organisations will want to bring customers into the product/service development process even earlier than they already are. Or customers could take a complete, cradle to grave view of product quality.


Democratisation
Democratisation concerns the spread of knowledge amongst end-users as opposed to a hierarchical broadcast mechanism. Kevin Rose (@kevinrose) makes some interesting points about the sheer volume of data on the internet and how democratisation is the masses deciding what's important or relevant.

Users in the enterprise face the same challenges as users on the internet; the volume of data is huge and increasing. (This is why enterprise search is big business - HP are buying Autonomy, Google are pushing their search appliance and Microsoft are working their FAST acquisition/Bing into the enterprise where they can). The enterprise needs the same democratisation capability as the internet. Users need to be able to find and, through sharing, 'vote up' those key trends and issues that need attention, whether it's the trend of data from a manufacturing plant instrument or an unresolved safety audit finding.
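Mechanically, the 'vote up' idea is just a tally of user attention. A sketch, with hypothetical item names - the point is that the ranking emerges from the community's shares, not from a query algorithm:

```python
from collections import Counter

# Hypothetical shares ('votes') by users against enterprise items.
shares = ["plant-trend/TI-204", "safety-audit/finding-17",
          "plant-trend/TI-204", "plant-trend/TI-204",
          "safety-audit/finding-17", "erp/late-orders"]
votes = Counter(shares)

# The community's attention decides what surfaces first.
for item, n in votes.most_common(2):
    print(item, n)
```

In practice the 'vote' would be any sharing action, but the principle holds: the most-shared plant trend rises to the top without anyone constructing a query.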

Democratisation means the right attention can be given to the right issues at the right time.

Of course, this is a tough thing to do; the nature of data in the enterprise is different - the historical lack of enterprise-wide search means metadata is lacking or inaccurate and vendors have spent years implementing protected, proprietary, and closed systems. But with intelligent design we can overcome this.

Curation

And so to curation. Buzzword du jour. There's a definite move away from the academic definition to curation as a development of the democratisation of internet content, usually via social media. The key factor is that content is filtered and organised by a human for a community of people with common interests, rather than aggregated by an algorithm designed to respond to a single query.

This is a logical step; faced with huge quantities of available data, there's just got to be some way to make sense of it all; what's worth reading and what's worth keeping? An algorithm can't answer those questions because it doesn't appreciate the nuances of the community and those shared interests. You need a curator to take the step from the personalisation around their interests (dealt with in Web 2.0) to the relevance of content or data to the wider community built around shared interests. 

As I write this in August 2011, there are few content curation solutions out there - most are in private beta and all are aimed squarely at the internet and socialising content.

Just as it's the next logical step for the internet, it's the next logical step for the enterprise. The enterprise is full of communities, from those with a shared interest in cycling to those with a shared interest in energy and sustainability, or safety. Communities are reliable, robust and inclusive; they lead to better decisions and by their very nature engage users and ensure that the relevant data gets to interested parties. A well curated community kills a reliance on email, or any other form of serial information delivery. Developing the idea of curating syndicated data mentioned above opens the door for expert third parties to be consulted or offer services. 

Altogether now…

Simply put, I believe that democratising syndicated enterprise data through user communities is a good thing. Each enterprise and each employee that works within it can be more autonomous, more expert and better connected. It's a killer combination.

Thursday, 18 August 2011

The job of your staff is to populate your to-do list


....not the other way around.

What the hell…? The job of my staff is to do the stuff I tell them to, surely?

Well, no. The problem with a CxO/manager telling the staff what to do is that the staff are closer to the customer. Or the systems. Or the problem.

Now, this isn't intended as a recipe for corporate anarchy, or some sort of democratic leadership. The CxO has a job in terms of putting in place the structures and processes which ensure that the work gets done. (cf. 5 Rules, 8 Commandments). From that point on, the CxO/manager's job is to respond to what the customer/problem facing team is telling them - or needs from them.

So, the CxO/manager having decided that they want, say, an agile development team where the focus is on frequent releases at a production level of quality with low risk, it's the CxO/manager's job to get in place:
The simplest tool you can find.
The simplest, clearest, fewest guidelines necessary.
The simplest, clearest, smallest organisation structure you can define.

Then, let the guys on the ground talk to the customer, do their job, work within the tools, guidelines and org structure that's been prepared for them. The CxO/manager's job is to support them.

(You don't need someone to draw really complex processes that look great and never get followed.)
(You don't need to invest in something expensive or complex.)
(You don't - and as a former Programme Manager this pains me - need MS Project.)

For instance, managing a small development team on an agile software product development project, we have:
  • Tool: Defect/issue management software TRAC (an open source solution). 
  • Guideline: We flex enhancements in favour of defects for a weekly release.
  • Org structure:
    • One person takes responsibility for the crucial production trigger points 
    • Everyone has a specialism.
Our sales team approach is looser as the team's smaller:
  • Tool: Salesforce.
  • Guideline: Find new & interesting people to work with.
  • Org structure: One person per sector.
The beauty of this approach is that it's scalable. Want to expand the sales team to talk to more sectors? Duplicate it. Again and again. Want to scale up development? Duplicate it.

Every time you duplicate you increase the value of the information the CxO/manager supplies their teams as it can be used by each duplicated unit. Therefore you increase the value of every question that's asked of the CxO/manager.

Therefore you increase the value of every item your team places on your to-do list; you increase the value of your contribution to the company.

I'll let you know how we get on.

Friday, 12 August 2011

Four things I say a lot to my team and what they really mean


Those around Sabisu are used to hearing me say;

1. 'Tell me what you need'

Nothing's worse than screwing up when it could have been avoided. I hate the phrase, 'if only'.

If we're about to lose a sale because we don't have a one page summary describing how great Sabisu is for supply-chain collaboration, then it would be nuts not to ask for it. 

Equally, if a customer needs to know why we've gone with functionality A and not B, or any other difficult question, then put them straight through to someone who can help; clear the decks.

Delegate upwards so those at the sharp end can focus on what they need to do.


2. 'Show me what you've done'

I see this as pretty close to Terry Leahy shopping in his own Tesco stores, or a chef working the pass; it's all about checking the quality. It says in Rework: everything is marketing. So everything has to be high quality.

By constantly reviewing what the team is doing, you need to check their work less in the future; they'll get used to the possibility of a review and up their game to meet a higher standard.

For me, this is nothing more than the 'test first' approach of, say, Extreme Programming, or the 'go to production early' approach of any agile methodology extended backwards into the pre-production lifecycle. Get it in front of the customer early to prevent defects being found expensively late in the development cycle.

3. 'Good job.'

Or 'nice one'. Or 'thanks'. 

(Or 'good stuff'. Or 'rocking'. Or 'awesome'.)

Being enthusiastic. Saying thanks a lot. It's just the right thing to do.

4. 'Riiiiiight...?'

Never smack down an idea until you're absolutely sure that (i) it's been expressed fully, (ii) you understand it, and (iii) you understand its implications. 

I suppose it's about pre-judging an idea. It's easy to do - particularly when you're tired, the kids are playing up at home and the dog has developed a horrific bowel ailment or whatever, but I try to remember that the guys in the office are great at what they do and most of the time their ideas are good.

So, 'Riiiiight?' really means 'tell me more'. They get that.



Friday, 15 April 2011

Guest post for MMUCFE

Lead Us Not Into Evil: my thoughts on ethical leadership - guest blog post for MMU's leadership programme @ 

Friday, 4 March 2011

Why the time is right for the Networked Enterprise

Here I blogged about the cloud as an enabler for the ‘networked enterprise’.

The reasons why I think the concept will take off now are:
  1. The emergence of truly collaborative platforms that leverage the enterprise network.
    (Disclosure: I’m the owner manager of http://www.sabisu.co/)

    Only now is the technology in place; social business software (or Web 2.0 or Enterprise 2.0) situated in the cloud is the only way for the networked enterprise to return value.

    Without these technologies enterprises are back to building specialised point to point bridges with increasing expense blocking implementation and complexity nullifying returns.

  2. Studies are beginning to show the benefits of deploying web 2.0 and cloud technology; this should convince the pragmatists that the emphasis in ‘social business’ is on the ‘business’ part. Until this point, they've had ample anecdotal evidence to belittle the impact of social networking. It's the arrival of studies like this that should change things.

    Of course, the social networking side of things is important but without point 1, the collaboration platforms, you've got nothing better than LinkedIn (which is great, but to a point). If the 'networked enterprise' concept is going to return value, you need the collaboration capability too.

  3. People are ready; Facebook/Twitter/LinkedIn have rocketing user numbers – each of those users is getting a starter course in the power of the network and the ease of using a web 2.0 application. Collaborating with a user in another organisation isn't scary when you've already LinkedIn with them...and even if you haven't, the very existence of these platforms gives you the permission to work beyond your organisation - the psychological barrier has been removed, so the boundaries fall too (as explored in Malcolm Gladwell's The Tipping Point).
  4. Corporates are (almost) ready:
    • Deperimeterisation is coming as Execs start to drive adoption of ‘consumer’ technologies (oh go on, I’ll mention the iPad) in the enterprise. This removes a psychological boundary; they expect at work what they get at home, but crucially vice versa.
    • Savings seduce: IT departments are seeing the time, cost and hassle savings. If an end user can get the service they need direct from the cloud - and for very low cost - then the temptation is to stay out of the way.
    • Corporates are beginning to understand that the biggest security risk is people, rather than some esoteric hack attack.
    • Corporates understand that the ‘cloud’ is an extension of ‘vendor hosted’ and generally they have experience of this already. So it’s not too scary.
Of course, there are obstructions and gaps which will slow adoption. But those who do adopt will have a significant competitive advantage - and now for the first time, they can.

Friday, 25 February 2011

Does the cloud mean enterprise relationships will mimic personal ones?

Until now, the usual behaviour for enterprises is that they work together for the duration of a contract then they part. The concept of the 'Virtual Enterprise Network' is a good example; a temporary affiliation of multiple enterprises to achieve a given aim.


Contractually that might be the case, but an enterprise doesn't actually do anything; it's the people that make up the enterprise that do things, and even though the affiliation is contractually temporary, the relationships between people are permanent. 

Cloud computing can change this; it can provide a common demilitarised zone that many organisations can share. Whether it's a virtual private cloud or a 'public' cloud is irrelevant; for the first time there is a place that's permanently online, always on tap and can be made to hold to common standards. Until now, it's been a case of choosing a partner and investing in development to connect and exchange data with that partner, hoping to offset the whole lot with an ROI calculation that someone with a pot of money believes. Now, you can describe the data exchange in a common standard, hook up to a cloud solution and wait for everyone else to join in - the investment is made by the guys the enterprise pays to host your cloud solution.


This allows enterprises to form links that are more like the links people forge between themselves. How often do you de-friend someone on Facebook or LinkedIn? You don't; you keep them in your network 'just in case'. Enterprises can keep others in their network, just in case.

So the position of the cloud between organisations should allow those personal relationships to be realistically represented at an enterprise level; the links between organisations can be left in place indefinitely, to be used as required, with little cost implication.

This would support Kevin Kelly's argument in What Technology Wants that over time technology trends towards complexity, for if the cloud is indeed a common DMZ then the future looks very complex with every single organisation generating more connections and never deleting old ones.

The implications are many but here are some that occurred to me:

  • The walls around an organisation are destined to become ever more porous. It's unavoidable that there'll be multiple external cloud solutions that the enterprise will want to connect to. 
  • Independent consultants and SMEs (with the emphasis on the 'S') could find themselves on a level playing field with much bigger organisations.
  • Ultimately the enterprise could become dominated by a loose affiliation of knowledge workers, rather than dedicated employees.

And it means that there's no such thing as a 'virtual enterprise network', or a 'collaboratively networked organisation', or any other variant: there's simply a networked enterprise. 

Wednesday, 2 February 2011

Further thoughts on MMORPGs & collaboration

This is a great blog post:
http://shanleykane.wordpress.com/2011/01/31/online-collaboration-sucks/

For me, that's what blogs are all about; a personal mash-up of experiences shedding light on something new.

I totally buy it. Back in '92, MMORPGs were MUDs and were all text over Telnet; I was a heavy user of Razor's Edge (a CircleMUD in Liverpool) and Infinity (based out in the US somewhere) at the time. Now, at that time MUDs migrated into being 'talkers', which were literally just text chat rooms. This didn't have the same attraction to me because on a MUD you were doing something, whether it was trying to level-up, get hold of a particular bit of kit or whatever. The attraction was working (gaming) together, in a shared environment, with instant communication and the right toolkit for the job...

Anyway here's how I'd extend @shanley's blog into the corporate environment:

1. Collaborators need the raw data

Most workplaces are much duller than the environment created in WoW. Corporate life is spreadsheets and docs; it's data. So when @Shanley describes it as a 'shared experience', for the corporate it translates simply into access for all collaborators to the data that describes the work to be done. It absolutely can be a shared experience though; you just need a single place where everyone can get to the same data at the same time on their terms.

...which means that IT departments need to move from compliance and control, to enabling and de-obstructing; the modern IT department function is to get the business data to where it needs to be - which might be somewhere outside the enterprise...

Perhaps the one place where you could justify a rich graphical interface is where the physical environment is important, say, if you're trying to collaborate on a problem with physical constraints. This could be valuable in hazardous manufacturing, petrochemical or nuclear plants. Real potential for 3D WoW interfaces there.

But for most office workers I think that in practice the interface has to be a portal variant; whilst the experience may be shared, each user needs access to the data, tips and tricks that allows them to add value - in the same way that a user on a MMORPG has particular capabilities or tricks up their sleeve.

I think that's a pretty good analogy for a day at the office.

2. User Autonomy Means Increasing Complexity


So it's going to be awfully difficult to hold everyone's personal configuration isn't it? Well, I don't buy that. Technology has moved far down the path of personalisation but it's not going to stop any time soon. I accept that a corporate is going to want to set users off in the right direction (e.g., everyone in a particular role starts with the same UI layout) but if I can re-configure it to make it work better for me, why can't I?

As Kevin Kelly says in 'What Technology Wants', the technology trends towards increased complexity; that complexity is everyone in the world doing it their way.

If every user is forced to look at their work - a document, project or whatever - in the same way, you lose the illumination that a shift in perspective can provide. The data may not change, but I might choose to view it in a different way...of course, this will root out those who depend on presenting data in a particular format to support their position.

3. Communication Features != Collaboration App

Couldn't agree more. Every application trumpets its collaboration capabilities. Without the sharing of the data that describes the issue/project/work item itself, you can't collaborate on it, because what you're actually doing is going after the data, copying and pasting it into a messaging system (email, OCS, Lotus Notes, Google Docs).

You have to have the raw data to hand so that you can have that 'shared experience' and truly collaborate.

If that's the case, how many collaboration systems are really out there?

4. Sounding Off

I think finally the technology is mature and the corporates are looking to the value and competitive edge of working closely with their partners, suppliers and customers. The solutions are just beginning to become available; the value propositions are only just being clarified and supported with case studies and benefits evidence.

The main obstacle is in the minds of the risk averse IT departments, who are still blocking Facebook/Twitter/LinkedIn, and have yet to realise the power of the network.