Friday, 13 November 2009

What is Enterprise Architecture?

In the LinkedIn group The Enterprise Architecture Network, Kevin Smith posed the challenge “Describe the purpose of EA in one 160 character SMS message”. This is the modern equivalent of the elevator pitch for justification of Enterprise Architecture. My contribution ...
Provide the tooling to deliver the necessary and sufficient processes of the organisation optimally, responsive to external demand, and with a record of reason.

Friday, 2 October 2009

Demo vs. Production BPM-based Systems

In his article Demo vs. Production BPM-based Systems, Anatoly Belychook sounds a warning that the habit BPMS vendors have of presenting their suite as a complete application development solution through 'demo' applications can lead to serious problems for users when they try to productionise even a single process. I have quoted Anatoly's points in full and annotated them with my thoughts.

  1. The user portal - a web application that starts processes, displays the list of tasks assigned to the user, manages activity forms for these tasks, and monitors and administers processes. It will have a different design in production and most likely different functionality too. If you’re lucky you will be able to customize the out-of-the-box portal, but be prepared to rewrite it from scratch at some point. Or to get away from a standalone BPM portal completely and wire process functionality into corporate applications. The reason: users typically do not accept the BPMS supplier’s opinion that BPM should be the centre of the user’s universe. How true. It may be unfair to label the provision of a BPMS-centric portal or application as a fault or failure of the manufacturer ... it would be very hard to sell the product without being able to show an end-to-end model of implementation. The real failure is for BPMS solutions to be presented without any clear directions on how they fit into common enterprise technical architectures, and then black-boxing the behaviours of the BPMS.
  2. In particular, you should eventually get rid of the “start process” button. From the user’s perspective, he doesn’t “start a process” but does something real, e.g. accepts an incoming order or submits a request for vacation. The system must start the appropriate process transparently. Insightful ... just like the rest of the IT/IS components, the BPMS is a tool that transparently handles a bit of business communication; there is no need to bring BPMS terms (message, process, instance ...) into the end-user lexicon.
  3. Be prepared that activity forms generated by the BPMS in a few mouse clicks will no longer meet the functionality, usability and design requirements at some point. So it’s better to have an idea of how you will eventually develop these forms in terms of tools, labour force and costs. The importance of this issue cannot be overestimated: what good is it that the process scheme is depicted in two days if forms development for this process then takes, say, two months? (I do not play down the importance of rapid prototyping of screen interfaces - it’s a must for BPM; one won’t even come close to production without it.) By the way, you probably would like to use the same tools to rewrite the BPM portal. BPMS implementations do not spring up on a virgin site; consider how the enterprise views its technical architecture. Perhaps the architecture is layered, with presentation separate from the 'application code' or 'business service'. Then you will want to devise your working surface around the user actions and communicate with the BPMS with messages at the appropriate point. Of course, those messages may contain standard (within the enterprise) objects, so choosing a BPMS that forces definition of the message from a form layout will not be a good idea.
  4. Similarly, you will no longer be satisfied with the out-of-the-box reporting and monitoring tools at some point. Remember that this is about business requirements, and a fancy dashboard updated every second showing messages per second by type is not useful beyond the server room. First catch your business requirement, then match it to the offering of the supplier. The most likely solutions will be found in generic reporting mechanisms, but you will need to understand the operation of the BPMS to integrate its information about queues, instances in progress etc. with information from other enterprise sources (stock on hand etc.).
  5. Demo and pilot processes typically store all data in process attributes, process variables or operands (different systems use different terminology), but only relatively insignificant and/or temporary information will be stored this way in production. Most data will go into a traditional database and only the primary key of the corresponding record will be stored within the process. Considering the process of client purchase order negotiation as an example, the information about the client and the order items is likely to be stored in a database, while customer and order identifiers will remain in process attributes together with the deadline date for the call to the client. The reason to act this way is obvious: data which may be of interest after the process instance has ended must be stored so that it can be accessed independently from the process instance. This also means a separate user interface to this data, independent from process screen forms. As for the process screen forms, they should access both process attributes via the BPMS API and database fields via the DBMS API. This actually requires some significant design consideration. In general, a process is a long-running, non-ACID transaction. There may be instances where the information in the process instance represents a (useful) past state of data in a corporate database.
  6. Building on the previous item, most of the long-term information (though usually not all) will already have a home in your existing enterprise applications. Accordingly, the process attributes will store only the identifiers of the appropriate business objects and process screen forms will access the data stored within the application. (The latter isn’t an absolute requirement - total integration is often very time-consuming, so partial integration may be more justified.)
  7. Similarly, while a demo or pilot will most likely store related documents (usually Word or Excel files) as attachments to a process instance, you’ll have to consider something more solid for production. The reason is the same: if the document may be of interest after the process instance has ended, then it must be kept independently from the process instances, and user access to it must be provided independently from the user interface to the process. However, you don’t need a full-blown ECM system: because the BPMS takes care of the workflow, you need only document storage functionality with basic interfaces (user and programming) and services (search, archiving, security). If you are considering technical architecture components as a whole, you have the opportunity of avoiding top-end document management systems which might otherwise be chosen for their version control and workflow capabilities, because these can be realised in the generic BPMS toolset. However, just because a document management system has workflow capabilities, assuming that every process can be managed through its documents will cause worse problems than assuming that a BPMS suite can generate all the business application solutions.
  8. User authentication and authorization in a demo or pilot is usually done via an independent LDAP directory, a database or even a static list stored in an XML file. It is obvious that the production system should utilize your existing user directory. But a bad surprise may be the amount of effort this requires. To start with, there are usually several such directories. A typical example: an Active Directory, a separate authorization system within the legacy accounting system, and a database keeping the users of remote offices and partner companies. As the project evolves, additional requirements may arise, e.g. planned absence and automatic rerouting of tasks. It is known that for a company with about a hundred users an Active Directory implementation alone is a non-trivial project, and now we are facing a more difficult task. As a result, as much as 50% of total BPM project costs are spent on authorization and authentication issues on some projects. Imagine for a moment that this happened in your project and you didn’t take it into account in the project schedule: you are over schedule and budget by as much as 100%! The bottom line here is that the BPMS should be considered in terms of the ease or difficulty of implementing it within a separately chosen identification and authentication solution. As the world does not stand still, the interfaces for authorization and authentication should be expressed in terms of standards rather than supported products.
  9. For obvious reasons, demos and pilot projects do not take on the most complex business processes. That would be all right but, worse than that, they are usually technically implemented as a single process thread. In reality, even the relatively simple employee onboarding process technically consists of several processes communicating with each other (it’s enough to notice that processing the incoming resumes is not directly related to the publication of vacancies). This is even more true for the end-to-end processes that are of greatest interest in terms of business (see the “End-to-end Process Orchestration” antipattern and “Internal Order” pattern). Accordingly, you will need more functionality from your BPMS pretty soon - not only orchestration but also choreography. Modern BPMS are fine with that, but if a rudimentary workflow and/or document management facility built into your accounting system is all you have, then you may be in trouble.
  10. And finally, a production system differs from a pilot in reliability, performance, security … but these are standard requirements, not specific to BPM. Failures of the BPMS service and the infrastructure that supports it have to be handled in the same way as any other operational service (separately from failures of the business process). Recovery from failure is complicated because the state of the database that supports the BPMS operation is generally not synchronised with other corporate databases, and certainly not with invoked services.

BPMS solutions do offer a way of addressing IT/IS delivery issues, but they do not eliminate the need for the basic requirements, design and systems management that apply to all business solutions.

Wednesday, 16 September 2009

Advantages of having a business component-level blueprint of all public services.

Sheri Loessl on the IBM Blueworks collaboration site pointed out the value of having your government blueprinted as though it was one large business enterprise.
Having a holistic view of the business of government will aid government leaders in their strategic thinking, and in decision making on initiatives or programs that must span multiple agencies or governmental jurisdictions. Having a unifying master business model of all areas of government can help government officials to 1) identify opportunities to realize efficiencies, 2) deliver programs and services more effectively and innovatively, 3) identify and align roles and responsibilities to enhance collaboration across government (and beyond), and 4) reduce time-to-service. It provides the ability to look across program areas or jurisdictions and find areas where sharing services and collaboration for better outcomes can be realized.
For all those working as enterprise architects in the public sector, there are benefits that can readily be realised from this approach.
  • Don't have to reinvent the processes of government for each department or agency (there is also commonality across systems of government - Westminster, Federal ...)
  • Goals and measures can be consistently applied from top to bottom
Let's get out there and share or borrow these organisational components before re-inventing the wheel ourselves.

Thursday, 10 September 2009

Smart Meter Privacy Issue

For the ultimate monitoring of your home life, consider the humble electricity meter, now being updated for the internet age. This post covers the issue in some detail. Smart meters, like other devices that are associated with what you do, harbour a privacy genie that, once out of the bottle, will be a devil to get back in.

Thursday, 3 September 2009

Petabytes for your datacentre

This from Backblaze takes me back a few years when I had to contribute to a cabinet paper to plan for the purchase of a new disk drive (just one and well less than a gigabyte!).
There is a neat chart demonstrating the difference in cost between what you pay for raw disk units at your mail-order supplier and storage as a consumable. How does a pile of disk drives at US$81,000 become a staggering US$2.8 million from EMC or Amazon?
There is also a how-to which is complete down to the rubber bands needed to damp the drive vibration so I expect my geek son to be warming his flat with one of these.


Saturday, 8 August 2009

Locational Privacy

EFF (Electronic Frontier Foundation) has published a great article covering the implications that location-aware services and technology have on privacy.

Transit passes and access cards

Another broad area of application is for passcards and devices allowing access to protected areas; for instance, passcards which allow access to bike lockers near train stations, or cards which function as a monthly bus pass. A simple implementation might involve an RFID card reporting that Bob has checked his bike into or out of the storage facility (and deducts his account accordingly), or equivalently that Bob has stepped onto the bus (and checks to make sure Bob has paid for his pass). This sort of scheme might put Bob at risk.

A better approach would involve the use of recent work on anonymous credentials. These give Bob a special set of digital signatures with which he can prove that he is entitled to enter the bike locker (i.e. prove he's a paying customer) or get on the bus. But the protocols are such that these interactions can't be linked to him specifically, and moreover repeated accesses can't be correlated with one another. That is, the bike locker knows that someone authorized to enter has come by, but it can't tell who it was, and it can't tell when this individual last came by. Combined with electronic cash, there is a wide range of card-access solutions which preserve locational privacy.



The time has come for the unnecessary collection of personally identifying information by transport operators to stop, permanently addressing this aspect of locational privacy.

This subject surfaced briefly with the introduction of the Snapper transport payment card in Wellington, but was not addressed practically by the transport operators, who appear to rely on assertions about the security of the device rather than preventing the undesirable uses to which the gathered information may be put.

The technology required for anonymous credentials is now practical. Legislators and privacy guardians should move from wording policy statements to demanding that personally identifying information is not collected unnecessarily.

Thursday, 6 August 2009

Architects work to the Maker's Schedule

Paul Graham's essay on the manager's schedule and the maker's schedule provides food for thought for those of us that are expected to come up with ideas to deadlines.

There are two types of schedule, which I'll call the manager's schedule and the maker's schedule. The manager's schedule is for bosses. It's embodied in the traditional appointment book, with each day cut into one hour intervals. You can block off several hours for a single task if you need to, but by default you change what you're doing every hour. .... But there's another way of using time that's common among people who make things, like programmers and writers. They generally prefer to use time in units of half a day at least. You can't write or program well in units of an hour. That's barely enough time to get started.
So, manager-type, you have asked the enterprise architect to come up with a new vision and roadmap for the business and technical architecture ... does it really help to haul them into ad hoc meetings at short notice to ask about your current pain? Try cornering them at the coffee machine instead!

Wednesday, 29 July 2009

Drools + BPMN 2.0

An encouraging announcement from Drools indicating commitment to BPMN 2.0 at both the notation and XML representation level. This is how standards adoption should work ... do not wait for the tedious ratification and adoption votes; commit to the standard (whatever it will be); implement as it gels so that people can get used to it; fine-tune the implementation as the standard reaches its ratified state.
I would put in a plea for a formal compliance document, stating what part of BPMN 2.0 is not yet implemented, so that early adopters do not waste time trying to decide whether it is the implementation or the user's dumb specification of a business process that is at fault.

Saturday, 11 July 2009

Seastead - an Ark for Tokelau

I returned from a few days off-net to a barrage of articles and emails that I really should get down to, but a couple came together so appropriately that they interested me far more than the dry stuff about business process that I usually have to force into the brain.


In a well thought out article in the NZ Listener, Ruth Laugesen describes the plight of the Tokelau islanders in the face of climate change.

In New Zealand these days, idle conversation can turn to climate change and what it might hold for our children and grandchildren. In Tokelau, which has been settled for 1000 years, such conversations are almost too difficult to have.

“At the end of the day we will be the first people to go underneath the water,” says Toloa, the ulu, or head of Tokelau’s governing council.

“It could happen at any time. There could be one cyclone where the whole island could go underneath the water. It’s quite difficult and it’s quite painful to try and accept the fact that one day we may wake up and we are underwater,” says Toloa, on the phone from Apia, Samoa, where Tokelau has its administrative base. “So it’s not a good feeling. We’ve heard the Al Gore presentation and know all [about] global warming and all that kind of stuff,” he says.

Tokelau, population 1416, is a self-governing territory of New Zealand and a forgotten frontline for climate change. Two other Pacific atoll micro-states, Kiribati and Tuvalu, have become international symbols as some of the first nations that could become inviable as a result of climate change. But Tokelau, as a low-lying atoll state, is just as vulnerable.

New Zealand’s history as a tinpot colonial power means Tokelau’s people are New Zealand citizens, have a New Zealand flag and, bizarrely, observe Waitangi Day as their national holiday. In Wellington, the Ministry of Foreign Affairs has an Administrator for Tokelau. This year New Zealand will give Tokelau about $17 million in aid.

Coincidentally, an article from Inhabitat described the current thinking of The Seasteading Institute, whose modest mission is

To further the establishment and growth of permanent, autonomous ocean communities, enabling innovation with new political and social systems.

Seastead has this visionary approach to above-the-waves living, with a fairly modest $5/sqm target.



On the one hand, we have pacific islands, atolls actually, and their populace disappearing beneath the ocean. On the other, a plan for establishing communities living on the ocean.

Although the Seasteaders aim to avoid problems with territorial authorities by moving around on the high seas, Tokelau and other island groups could utilise the same technology to address the rising seas that will eventually engulf them. Even awash, the atolls would provide protection from extreme waves. With mobility a design feature of Seasteads, getting out of the way of cyclones would be a benefit sought by many Pacific islands.

Why even think about spending large amounts of money on keeping the islanders in the middle of the ocean rather than relocating them to South Auckland? Well, NZ does claim a large Exclusive Economic Zone around Tokelau, which would be hard to sustain if the islands were abandoned.




Monday, 25 May 2009

Auckland Super City IT Costing

These two statements seem well out of step.

Merging council IT systems to create an Auckland "supercity" will cost the best part of $200 million and could take eight years to complete, according to consultancy firm Deloitte.

The [total] estimated integration costs have been assessed to range in total between $120 million and $240 million over a four-year implementation time frame - from the Royal Commission Report.

Did the Royal Commission ignore the costs of systems? Are IT organisations and consultants taking the opportunity of change to gold-plate systems or include the cost of deferred maintenance and upgrades?
While each organisation might have a different system for rating, dog licences etc., the business functions that these systems support are the same before and after implementation of the super city. The business of the new Auckland Council is an amalgamation, and hopefully a slimming, of the business of the existing authorities. While the changes for IT are not trivial, I suggest that the line-by-line examination of budgets should not stop at central government, and someone should ask hard questions along the lines of "why can't one of the existing finance systems, dog licensing systems etc. be scaled up to cater for the increased population?".

Saturday, 23 May 2009

Heading into the clouds

I am taking my first steps into the Intalio Cloud. As an individual and for my small business, I like the concepts of cloud computing and use the Google family of services as part of my normal daily operation. This started as a Google Doc. Having an interest in the use of BPMS to formalise operations within and between organisations, I have been following the Intalio BPMS offering for some time and, despite some ragged edges, I find it a good approach.

Intalio Cloud is an interesting prospect. Where do you place it in the taxonomy of the cloud? Is it providing storage, compute power, platform, value-added services ... ? Not only all of the above but apparently a range of hardware and services so that you can be your own cloud provider. An interesting scalability equation: I can operate somewhere in a black-box datacentre with 2 users for free; expand into a productive organisation at $x per user per month; then form my own cloud service datacentre (using the surplus power and cooling capacity of NZ's South Island), all without changing the operational business processes.

So I will be doing a bit of tyre kicking and much thinking about the security and other risks associated with putting the fundamentals of business operation out into the cloud. One problem with adopting a business solution in the cloud is that you may not pay much attention to what is going on to give you the results. You may trust that the availability of the underlying components in the black-box data centre will be sufficient for your needs as you grow, and that your operation is secure. That last one is a bit problematic ... out of the box, Intalio has you logging on with userid/password across HTTP rather than using encryption (even HTTPS would be a great advance).

Wednesday, 8 April 2009

BPMN - Why and how of Signal

Thanks to Rick Geneva, I do not have to describe what a Signal Intermediate Event is used for in a business process. He has provided the use cases for this useful element of BPMN. I responded quickly - "now all we need is standard implementations in products like Intalio" - pointing out the lack of implementation of signal in tools that support BPMN through to an executable. Of course, we can work around any lack of implementation of signal within a BPMS toolset, but the meaning behind signal is not like anything else, so a real implementation or common patterns of workaround would help.

You can draw the signal event in Intalio so at least the business process designer can start with a proper description of the use case even if the level-3 technical implementation diagram will differ in shape.

The BPMN specification says
A signal is a generic, simple form of communication
  • Within pools (same participant)
  • Across pools (different participants)
  • Across Diagrams

It is communication in its simplest sense ...
  • shouting out not knowing if anyone is listening, or has heard
  • and listening for shouts unaware of any other listeners

Signalling within pools

This is very important, as BPMN deliberately restricts message flows to communication between pools; so if you have parallel flows within a pool and you wish to communicate an event between the flows, a signal is the only BPMN mechanism available. For example, a simple synchronisation of parallel flows would be represented like this in BPMN.

Without signal, the workaround is to use a message via another pool like this ...

This does have the merit of working in Intalio but clearly deviates from the simple expression of the business process. Adding more listeners would seriously obscure the meaning of the business model with implementation artifacts.

Signalling across pools


In communication across pools, a signal has a single sender and one or more listeners while, with messages, each sender is connected to a listener. Here is a simple synchronisation between parallel processes.

Replacing the signals with messages, even in this simple case, loses the clarity of expression of the business process.


In this case the single activity of throwing a signal in the master process is replaced by two message event throws, and it gets progressively more complex as more processes are introduced. In addition, for every participant introduced, a change has to be made to the model of the process doing the signalling, although in practice no change is made to the real business process.

Across Diagrams


The lack of a signal implementation is especially felt where the business is modelled across a number of diagrams and there is a common need for communication - an overriding interrupt, perhaps. Each separate diagram may represent a division of the organisation by end-to-end process or department, with separate reactions to the communication (like a fire alarm). Implementing without signal has similar problems to the above, but the separate diagrams make the implementation and business models even harder to relate to each other. The link element is a candidate solution here, but its implementation is missing in Intalio. So we really need a full publish-subscribe service. RSS might be a practical solution to explore.


Conclusion

There is a real need
  1. for an implementation of signal events within tools that develop the executable from the BPMN model;
  2. in the absence of such an implementation, for a well-understood implementation method or pattern for each of the uses of signal.


Monday, 16 February 2009

Simulation


Bruce Silver points out that business process tools do not do simulation well or usefully and lists some features that should be present in the BPM toolset to facilitate simulation.
This would be great, but a simpler approach is to use the much more mature pure simulation tools (like Simul8). In that scenario, you design your process using your favourite BPM modelling tool; transfer the model to the simulation tool, adding the simulation properties and data; devise the optimal process under simulation; and transfer the new model back to your process design tool.
The overall approach is summarised by this diagram from Simul8, but any product that exposes its data structure could be used in the same way.
A standard BPMN schema would help the simulation product people assist.

Wednesday, 11 February 2009

Project management vs. process management

Ayalew Kassahun raised an interesting question on the LinkedIn BP Group
For many it may be weird to imagine a world in which no distinction is made between projects and processes. However, I think that every process instance is a mini project. ... Just for sake of discussion I would suggest that it is more appropriate to make no distinction between project and process management. In such a world what will then be the consequences on management, specifications and tools?

Firstly, I think that distinctions between process and project are hard to make outside the toolsets that attempt to specialise.

My interest is in business process management toolsets and I have been following much of the discussion about BPMN, BPEL and how the process model should end up being 'executed'. I am convinced that there is a need for a specification of the process that endures from the business idea through to technical execution of an instance of the process.

If we merge the concepts of process and project, I can see some real benefits in the process design world. For example, some analysis and modelling of the process can be done at the instance level rather than engineering a highly complex general model which attempts to deal with every possible eventuality. This alone has the benefit of reducing the impact of the process-design bottleneck, and allows for a common way of dealing with business activities, leading to common reporting and management. We would no longer have to treat activities differently depending on whether they were being handled through a PMO or through BAU.

However, in the BPM world, the toolsets are not very mature and some radical rethinking would have to take place.

Because the instance of a process/project could change during its life, tools that do not have a tight relationship between model definition and execution will really struggle to deliver. The executing process will need to be changed through the business expression and not through some programmer intervention. So I would expect the run-time BPM system to be executing a development of BPMN rather than a transformation into BPEL, Java or whatever.

Project management vs. process manage...

Project management vs. process management



Ayalew Kassahun raised an interesting question on the LinkedIn BP Group
For many it may be weird to imagine a world in which no distinction is
made between projects and processes. However, I think that every
process instance is a mini project. ... Just for sake of discussion I would suggest that it is more appropriate
to make no distinction between project and process management. In such
a world what will then be the consequences on management,
specifications and tools?

Firstly, I think that distinctions between process and project are hard to make outside the toolsets that attempt to specialise.

My interest is in business process management toolsets and I have been following much of the discussion about BPMN, BPEL and how the process model should end up being 'executed'. I am convinced that there is a need for a specification of the process that endures from the business idea through to technical execution of an instance of the process.

If we merge the concepts of process and project, I can see some real benefits in the process design world. For example, some analysis and modelling of the process can be done at the instance level rather than engineering a highly complex general model which attempts to deal with every possible eventuality. This alone has the benefits of redeucing impact of process design bottleneck and allows for a common way of dealing with business activities leading to common reporting, management. We would no longer have to treat activities differently if they were being handled through a PMO or through BAU.

However, in the BPM world, the toolsets are not very mature and some radical rethinking would have to take place.

Because the instance of a process/project could change during its life, tools that do not have a tight relationship between model definition and execution will really struggle to deliver. The executing process will need to be changed through the business expression and not through some programmer intervention. So I would expect the run time BPM system to be executing a development of BPMN rather than a transformation into BPEL, Java or whatever.
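To illustrate the point (a minimal sketch, not any real BPMS API — all names here are invented): if the running instance interprets its business-level model directly, rather than a compiled transformation of it, then an edit to the model takes effect mid-flight without programmer intervention.

```python
# Sketch: an instance interprets a shared, editable model rather than
# a compiled copy, so a business-level change affects live instances.

class ProcessModel:
    def __init__(self, steps):
        self.steps = list(steps)  # ordered business steps, editable at runtime

class ProcessInstance:
    def __init__(self, model):
        self.model = model        # shared reference, not a frozen snapshot
        self.position = 0
        self.history = []

    def advance(self):
        """Execute the next step as the model currently defines it."""
        if self.position < len(self.model.steps):
            step = self.model.steps[self.position]
            self.history.append(step)
            self.position += 1
            return step
        return None

model = ProcessModel(["raise request", "approve", "fulfil"])
instance = ProcessInstance(model)
instance.advance()                    # runs "raise request"
# Business change while the instance is live: insert a review step.
model.steps.insert(1, "risk review")
instance.advance()                    # runs "risk review", not "approve"
```

A tool that transformed the model into BPEL or Java at deployment time would have no equivalent of that second `advance` call: the insertion would require regeneration and redeployment.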






Monday, 9 February 2009

BPMN - Hard to Code???


Bruce Silver continues the valuable discussion on BPMN semantics and challenges a perception that BPMN has vague semantics.
In the example chosen, I agree with Bruce that
  • having flows from downstream activities is not BPMN best practice (in that it can lead to misunderstanding by the casual reader who may be more familiar with basic flowcharts)
  • there is only one reasonable interpretation of the required execution of the BPMN and therefore it is not an example of "vagueness".

Looking at the definition of the Inclusive Gateway in BPMN 1.2, I might accept a criticism that it is hard to read in English and may benefit from formalising.

9.5.3.2 Sequence Flow Connections
This section extends the basic Gateway Sequence Flow connection rules as defined in “Common Gateway Sequence Flow Connections” on page 72. See Section 8.4.1, “Sequence Flow Rules,” on page 30 for the entire set of objects and how they may be source or targets of Sequence Flow.
  • To define the inclusive nature of this Gateway’s behavior for converging Sequence Flow:
If there are multiple incoming Sequence Flow, one or more of them will be used to continue the flow of the Process. That is,
  • Process flow SHALL continue when the signals (Tokens) arrive from all of the incoming Sequence Flow that are expecting a signal based on the upstream structure of the Process (e.g., an upstream Inclusive Decision).
      • Some of the incoming Sequence Flow will not have signals and the pattern of which Sequence Flow will have signals may change for different instantiations of the Process.
Note – Incoming Sequence Flow that have a source that is a downstream activity (that is, is part of a loop) will be treated differently than those that have an upstream source. They will be considered as part of a different set of Sequence Flow from those Sequence Flow that have a source that is an upstream activity (my emphasis).

However it is clear from the phrase "... expecting a signal based on the upstream..." and the note in the section above that the inclusive gateway in the example has no (merge) function in the event that a sequence flow from the downstream exclusive gateway is processed. The token from the loop back is the only one that can be 'expected' at the inclusive gateway.

It will only be hard to code if the developers have started from a point of view that each node (activity or gateway) can be transformed into an executable form (or executed directly) in isolation. The developer must consider the source of the signals (Tokens) and the structure of the process as a whole.
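The whole-process view can be sketched in a few lines (an illustrative fragment, not a real engine — the helper and flow names are invented): the inclusive merge is evaluated against the set of incoming flows that are still "expecting a signal" for this instance, computed from the upstream structure, with loop-back flows kept out of that set as the BPMN 1.2 note requires.

```python
# Sketch: an inclusive (OR) join cannot be evaluated node-by-node; the
# engine must know which incoming flows can still deliver a token for
# this instance, derived from upstream decisions.

def or_join_ready(arrived, expected):
    """Fire the inclusive gateway only when every flow still expecting
    a signal (per upstream structure) has delivered a token.  Flows
    whose source is a downstream activity (loop-backs) are excluded
    from `expected` and treated as a separate set, per the spec note."""
    return bool(expected) and expected <= arrived

# An upstream inclusive decision enabled flows A and C for this
# instance; flow B carries no token, and loop-back flow L is not
# counted among the expected upstream flows.
expected = {"A", "C"}
or_join_ready({"A"}, expected)        # False: still waiting on C
or_join_ready({"A", "C"}, expected)   # True: all expected tokens arrived
```

In the article's example, once a token arrives on the loop-back from the downstream exclusive gateway, it is the only token that can be expected, so the inclusive gateway simply passes it through with no merge to perform.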




Thursday, 29 January 2009

BPM high-wire act

Intalio appears to be eating its own dogfood very publicly, and on a high wire.
I wish Ismael and his team well with this venture as it will demonstrate that armies of strange developer types are not the key to success in getting value from a business process-centric view of solution delivery.

They are working with real business processes and have very quickly demonstrated two completely different approaches to BPM solutions with the same product.
  • A Customer Support Process is presented in the early stages of BPMN expression (Level 2??) and is full of clearly recognisable business steps (like 'assign to support team..').
  • The Marketing Process allows the process instance to be configured from a basic template at runtime rather than design time and the BPMN diagram is far more abstract. Perhaps there are some real developers around after all!


The template approach allows the business process to be finalised for each instance in a configuration table and should lead to a stable technical implementation without analysing the business process to death. In doing this, some of the benefits of BPM may be lost. If you look at an instance of the Marketing process through a reporting tool you will see that you are at Step N within a loop of Steps, but with no real sense of flow. In contrast, looking at a customer support process instance will show where you are, and how you got there. Without wishing to fan the flames of the executable BPMN v BPEL debate, tailoring the pattern at the BPMN level and executing the result retains the business-level communication of process requirement throughout the lifecycle.
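The template idea can be sketched as follows (a hypothetical fragment — the template, table and step names are all invented, not Intalio's design): the abstract diagram stays fixed, and a per-instance configuration table determines how the loop unrolls at runtime.

```python
# Sketch: finalising a process per instance from a configuration table.
# The "model" each instance runs is data, not a redesigned diagram.

TEMPLATE = ["draft", "review", "send"]  # abstract loop of business steps

CONFIG = {  # per-instance configuration table
    "spring-campaign": {"repeat": 2, "channels": ["email", "web"]},
    "launch":          {"repeat": 1, "channels": ["email"]},
}

def instantiate(name):
    """Unroll the abstract template into the concrete step list that a
    particular instance will execute, driven purely by configuration."""
    cfg = CONFIG[name]
    steps = []
    for channel in cfg["channels"]:
        for step in TEMPLATE * cfg["repeat"]:
            steps.append(f"{step}:{channel}")
    return steps

instantiate("launch")
# ['draft:email', 'review:email', 'send:email']
```

This is exactly the trade-off described above: the unrolled step list tells a reporting tool that you are at Step N of M, but the sense of flow lives in the configuration, not in a diagram anyone can read.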

I look forward to seeing the final implementation.