Remove the Stupid degrees of freedom

This is what almost all progress in productivity and security is about – removing the stupid degrees of freedom. Sure, there are people out there arguing that life is less electric when builders do not fall off skyscrapers under construction because guard rails stand in front of every drop of more than a meter. And yes, they will also argue that driving with the seat belt fastened is for people that should not drive a car in the first place.

You may wonder what seat belts and not falling off tall buildings have to do with degrees of freedom? Your body is free to continue at its given speed even when the car halts abruptly if you do not wear a seat belt. Hence you have a degree of freedom that will never ever be beneficial to you – and that is what I mean when I say “stupid degree of freedom”. The seat belt effectively removes your freedom to die violently even in moderate-speed collisions.

Your gut reaction might be to say STOP – I want to keep all my degrees of freedom – even the stupid ones – those that will never be beneficial. But if you are an intelligent and sane person you will soon come to the conclusion that I am right – stupid degrees of freedom should be removed wherever possible.

We have been removing and reducing the stupid degrees of freedom since intelligence took off. Reducing as in: do not smoke – or at least not continuously – if you want to stay alive. We put up traffic lights – and even if there is nothing stopping you from running a red light – most sane people just choose not to use the stupid freedom to crash into crossing traffic, and most of us appreciate the help the traffic light gives us in avoiding a possibility that is stupid. We put lids over the sewer entrances in the street – making it almost impossible for you to fall in – even though you could just as well walk around all those holes. The lids effectively remove the stupid degree of freedom to fall into one and break your back.

You get my point. Once you start to look at improvements in any field as reducing stupid degrees of freedom you will see that the description is a good one.

But why do I bring it up?

Well we need to talk about software development.

You think there are a lot of degrees of freedom in the physical world? Well, there are millions and millions more degrees of freedom in the software world. Most of those freedoms are stupid and never beneficial – like software bugs.

In the physical world we have an environment that we cannot argue with or redefine: we cannot walk through a brick wall, and we will not turn into smoke for no reason. But in the software world we have no such basic limitations unless we stipulate them ourselves.

Any sane software developer would set up some basic rules to reduce the most stupid degrees of freedom, you may think?

Do not be too sure.

This is where things turn ugly pretty quickly – there is a standoff between developers using strongly typed languages (C#, Java, C, C++) and those that use weak/dynamic typing (JavaScript, assembler) https://en.wikipedia.org/wiki/Strong_and_weak_typing

In short, you can think of strong typing as “I first set up rules – then, if I break them, the compiler will let me know” – which is pretty much to say what should be allowed and remove all other stupid degrees of freedom. Dynamic typing removes very few stupid degrees of freedom – you will need to be very careful or your program will not do what you wanted it to do – very macho – “if you cannot avoid stupid mistakes you should not touch a keyboard in the first place”.
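
As a minimal sketch of the difference – using C#, which happens to support both static typing and a dynamic escape hatch – the same misspelled member is rejected by the compiler in the typed case but only blows up at runtime in the dynamic case. The Order class and the misspelled Totla member are made up purely for illustration:

```csharp
// Illustrative only: the same mistake caught at different times.
using System;

public class Order
{
    public decimal Total { get; set; }
}

public static class TypingDemo
{
    public static void Main()
    {
        Order typedOrder = new Order { Total = 100m };
        // typedOrder.Totla = 50m;        // compile error: 'Order' has no member 'Totla'
        // string s = typedOrder.Total;   // compile error: cannot convert decimal to string

        dynamic dynamicOrder = new Order { Total = 100m };
        try
        {
            dynamicOrder.Totla = 50m;     // compiles fine - fails only when this line runs
        }
        catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException ex)
        {
            Console.WriteLine("Runtime failure instead of compile-time rejection: " + ex.Message);
        }
    }
}
```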

Most software developers have a self-image like this:

image

But if you are focused on the result rather than being very cool you are probably looking for someone like this:

image

Who has the most degrees of freedom? Who is looking more professional? Who is almost guaranteed to survive the day?

When it comes to software development you want to ensure that:

  1. You have quality built in so that you need very little testing to verify basic functionality
  2. You are almost guaranteed to reach the functionality you want without anyone having to act heroically in any way

The way to reach this is to start removing many of the non-beneficial degrees of freedom from the program/system.

But how can the system know what is beneficial to you and not? Well that is where the specification comes in – what is the system supposed to do? If only there were a way to declare what the system should be all about… Like UML.

What UML does is something that any sane person would want for any endeavor they take on; it defines the base rules on what you want to allow – and it does so in a way that has no alternate interpretation.

That means that UML is the unambiguous language you would want to write a specification for anything in. You would want to express your specification for software unambiguously since there are so many possible degrees of freedom that are just plain stupid – but sadly more likely (just because there are so many of them) than the degrees of freedom you really want.

Ok – I have now reached the point in my argumentation where I have left you with a suggestion to write a perfect specification in order to have a perfect system – and you may think that this is not really helpful – since it is impossible to decide beforehand what the specification should look like. After all, there were good reasons we ditched the waterfall planning strategies of the 1990s in favor of the agile approaches with short-term goals stacked on top of each other. True. But I am not finished yet. There is a piece missing.

  1. We must describe what degrees of freedom we actually want in order to be able to avoid the ones we do not want.
  2. If we can manage to describe what we want in a language that has no room for alternate interpretations – then implementation has no degrees of freedom left – it must follow our specification
  3. If the implementation must follow our specification there is no risk involved and we will always succeed

Wow. Success is guaranteed?! Yes – if there is no need for free-climbing heroes in production – it will never fail.

Now let us take this approach one step further: if there are no degrees of freedom left when we follow the specification to get the result – then we can easily build a machine that does it for us – after all, very little smartness is needed to follow instructions that are complete, exact and leave no room for interpretation.

Now you have MDriven.

And now we can take back the agility – but not on the level of implementation and coding – but on the level of specification.

This is the reasoning that has led us to claim that MDriven is a million times faster than traditional coding practices:

MDriven works by allowing you to describe what your system should do – then – a second later – your system allows you to do exactly that.

The true value of your efforts is in the description of the system – just as it should be – this is the intellectual capital that survives the tech shifts that will happen as time goes by.

This piece was written to help you understand why some of your hero developers might be reluctant to adopt MDriven. A strong hero will always be needed for some things – we are just saying that you do not need heroism to write 99% of the software you need today and tomorrow – a good evolving specification will do just fine – and you still need really sharp heroes to figure out how to evolve the specification to reach your true goals.

Excel Rocks! But…

If you build solutions for colleagues with Excel, you have the ability to build enterprise-grade, multi-user information systems with MDriven in about the same time.

Excel is amazing – it empowers you. We want to empower you even more with MDriven.

Excel is so great because you do not need the IT department to get things done – you do not need to explain every tiny detail that you cannot articulate until you see it for yourself. You can work very iteratively and arrive at a solution that works now and can be improved tomorrow. You can do ANYTHING and EVERYTHING.

Excel is however frustrating since your solution depends on strict definitions of what positions in sheets mean – and there is no easy way to communicate the relations between cells other than using column and row references. It is easy to lose track of the solution as it grows, and you go from being very excited to hating it – back to excited – and so on at a rapid pace. Part of this is natural developer angst – and a big part is fixable by raising the level of abstraction with MDriven.

Excel is sadly not very safe with multiple users because there is no difference between using the solution and changing the solution – and some of your users will change rather than use, by mistake, curiosity or trying to be helpful – and then you risk losing track.

  • We want to empower you to build multi-user, enterprise-grade information systems without involving the IT department. We want to give you the power you have with Excel but without the limitations.
  • We will enable you to separate “using your solution” from “changing your solution”, as is the hallmark of all robust systems.
  • We will show you how to avoid losing track of a growing solution by using plain English instead of cell references in all your rules.
  • We will show how to incorporate any level of authorization into your system so that you can keep sensitive information in the same solution as open information – and still be safe.
  • We will show you that you can replace not only complex Excel spreadsheets but also bulky licensed COTS systems with your own tailored MDriven systems. Doing so will save you licensing fees and make users happier – and put you in the driver’s seat for improvements.

Now you might think – “I really do love Excel and I am not prepared to leave it”. You will not have to leave it! Your MDriven system will work with Excel and it will be easy to push and pull data to and from your MDriven-based system. You can also easily produce Excel reports from your system.

There are actually only benefits going down the MDriven route – it gives you the abilities of an experienced software developer without the need to learn a lot of technology. We want you to be able to focus on the gist of your solution and leave the tech-stuff to MDriven.

Un-learn how to code

Un-learn how to code. Why would you want to un-learn how to code?

The problem with “coding” is that it is the process of taking a high-level description of information and process, one that can be easily understood by people – and translating it into a lower-level description of the exact same information and process so that your machines may understand it.

This is all well and good – but just as human translation from language to language has issues like distortion and lack of speed, we have the same issues with code and coding.

Furthermore, we often make no separate documentation of what we said to the translator or coder – we just assume the message will be conveyed to the machines and that it is enough if they have the definition clear – so that we do not need to hold all those things in mind.

We have wrongly assumed that it is the translator, i.e. the coder, that keeps the knowledge – but they don’t. Of course, the translator will know what you said an hour ago, yesterday or even last month – but they will have no chance of remembering all the details that went into code a year ago.

We also wrongly assumed that the rules we convey to the coder are timeless – that they will not wither with time. This assumption could have been correct but isn’t, because the coder mixes your rules with other requirements that are implied by today’s technology – the modernity – and these age rather fast. Your timeless rules are mixed with short-lived technology – and the result does not last.

This is why we must un-learn how to code.

Information technology is the most powerful tool we ever invented – it is too important not to use. It is too important to be allowed to be hard to use. Information technology must be democratized and made available through high-level descriptions that humans understand rather than low-level code that is error prone and already mixed with things that age.

MDriven is showing the way – everyone will eventually follow – but being first is an opportunity.

Who is MDriven for?

To answer the question of who we build MDriven for, we created these personas: The developer, The status quo person, The manager, The customer.

Developers do not want to spend excessive time getting one thing done since there are so many different things to improve. Developers tend to fix the things that need fixing the most to maximize any opportunity for improvement they see – then look around for the next thing in the queue.

Then we have a totally different kind of person – the status quo person. Developers can find themselves trying to get access to a thing that needs fixing for a longer time than it would take to apply the fix – when you are in this situation you are probably stuck in a discussion with a status quo person.

Status quo persons tend to focus on risk minimization rather than opportunity maximization. They are in love with the current situation, or at least they fear the unknown. This kind of person has a strong need for documentation and structure. The status quo person seldom fully understands the developer and vice versa. Depending on how little of the diplomat gene these two persons have inherited, there will be conflict.

The third person in this mix is the manager. The manager has a mix of status quo persons and developers to manage and will try to minimize conflict between developers and status quo persons with different strategies. A very common strategy is to set up meetings that effectively become the arena for the developers’ and status quo persons’ conflicts.

The fourth type of person is the customer. The customer wants an outcome that is reliable as a first priority and improving as a second priority. The manager is responsible for customer communication and customer satisfaction – this leads to the need to prioritize the developers’ and status quo persons’ work.

Problem statement

The manager can easily be fooled into listening too much to the status quo camp since the customer wants reliability as a first priority. This will however lead to losing the customer since the second priority – improvement – is actually more important than what the customer will let on. If the improvement factor is not high enough the customer will eventually stop using the product – and find another way.

IT has changed a lot in the last 30 years – but the organizational nomenclature has not. The IT department is still the IT department. The employees of a typical IT department have however changed, and it is common to see a leaning towards status quo employees rather than developers.

Modern businesses have many opportunities that could come from applying IT – and it will require developers. But the IT department has become the status quo department.

Business improvement meetings are often too abstract and technical to truly engage the doers in the customer organization – they are also time consuming with an unclear return on investment.

Not being able to fix business process issues at the same pace as they are discovered builds employee frustration that leads to shadow IT and un-optimized, un-sanctioned solutions – this in turn may lead to compliance issues with laws and rules that govern the business, and missed opportunities for increased efficiency.

How MDriven helps

The personas described above are all satisfied with an MDriven powered approach to development.

The developer: Fast-paced, self-documenting changes in a growing domain language (model) make it easy to continue to make improvements. Developers improve the system Gist separated from the Modernity and in this way avoid conflicts with the status quo camp of operations.

The status quo person: Well-structured, UML-compliant documentation of systems is always kept up to date and easily reviewed prior to release of updates. Reuse of a proven modernity platform ensures a foreseeable operating environment – and pain-free deployments.

The Customer: Stable Modernity platforms give a predictable environment that the business can trust – while the business at the same time can have domain-specific discussions with developers that continuously improve processes in incremental steps. The business can avoid getting stuck in the time-consuming requirement-gathering processes of the past – since this is now solved as an integrated, ongoing discussion with developers. Users in the organization get a stronger commitment to common goals as they are listened to and have an impact on the future of the processes in which they work – this enables a self-sharpening organization that will stay competitive.

The Manager: Happy employees and happy customers make a happy manager. Fewer conflicts between developers and status quo persons will increase productivity in both camps.

The alternative

If we remove the clear separation of Gist and Modernity that MDriven brings, we cannot keep a good separation between status quo persons and developers. Conflicts will then limit developer productivity and in the end this will lead to customer loss. The customer will also be left without a partner to talk to who can quickly apply improvements with ensured operational stability.

To AI or not to AI

If we zoom really far out there are only two problems in the world:

  • Figure out what to do (problem formulation)
  • Do it (problem solution)

Modern AI is always in the “Do it” category – because problem formulation requires a will to change – when AI gets there I will update this article.

Most new challenges (99%) we are put in front of are in the “Figure out what to do” category. And once we figure it out, we will not need AI for 99% of the solutions we come up with, since the solutions are linear or straightforward to solve with known tools that have precision and logic on another level than what modern AI has.

If my math is correct (1% of the remaining 1%), we will use AI on 0.01% of the challenges we face.

This 0.01% will be insanely helpful though – since it will help us “Do” things we have never been able to teach computers to do before. Almost all of these things boil down to classification of unclassified data (pattern recognition).

If we can get a computer to classify data then we can make it create wonderful music. Create an AI that can take any sound stream and classify whether it is wonderful music or not – then we could leave it with a noise source and come back and check it once in a while – it will have found stuff to classify as wonderful music for sure. With back-propagation it will be very much faster than just waiting for chance. A person could do this too – but not as fast, not for as long and not as cheap – and back-propagation is what a person does when she improves her ability by reducing her weaknesses.
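
A toy sketch of that loop, assuming nothing about real audio or real classifiers: the Score function below is a made-up stand-in for a trained classifier, and the candidates are just arrays of random numbers rather than sound streams. The point is only the shape of the idea – a noise source paired with a classifier that keeps whatever scores best:

```csharp
// Illustrative generate-and-test loop: random noise in, best-scoring candidate kept.
using System;
using System.Linq;

public static class NoiseSearch
{
    static readonly Random Rng = new Random();

    // Made-up stand-in for "how much does this sound like wonderful music?".
    // Here we simply reward smooth sequences; a real classifier would be trained
    // with back-propagation instead of being hand-written.
    static double Score(double[] candidate) =>
        -candidate.Zip(candidate.Skip(1), (a, b) => Math.Abs(b - a)).Sum();

    public static void Main()
    {
        double[] best = Array.Empty<double>();
        double bestScore = double.MinValue;

        // "Leave it with a noise source and come back once in a while."
        for (int i = 0; i < 10000; i++)
        {
            double[] candidate = Enumerable.Range(0, 64)
                                           .Select(_ => Rng.NextDouble())
                                           .ToArray();
            double score = Score(candidate);
            if (score > bestScore)
            {
                bestScore = score;
                best = candidate;
            }
        }

        Console.WriteLine($"Best of 10000 random candidates scored {bestScore:F3} ({best.Length} samples)");
    }
}
```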

It is a revolution!

But only for the 0.01% of the things we do – so stay in school – you will be needed a while longer.

What MDriven does with models is actually pure “Figure out what to do” stuff (the 99% of what problem solving is all about); then we take the models and “Do it” with well-known, established software strategies that have evolved over the last 40 years – in seconds – so that you do not have to. If you use MDriven to reduce the tedious “Do it” phase you will have more time to think about how to apply AI to your product.

Visual Studio 2019

You can now run MDriven Framework in Visual Studio 2019.

It will work in Community, Professional and Enterprise editions.

The same install will work in VS2019, VS2017 and VS2015. With this release we deprecate earlier VS-versions.

Who is MDriven Framework for? It is for developers that focus on Problem-Formulation rather than coding. The main challenge in digitalization is not writing code – it is problem formulation – to actually understand how you as a developer can help by bringing information technology to any situation.

Problem formulation in a domain that is open for improvement via the application of information technology will be a cyclic and hopefully never-ending task – you can always improve. To deliver value in this environment without getting caught up in old-fashioned coding issues, you will find that MDriven is a really good partner.

By allowing you to formulate the problem with models – instead of text – you can bridge the gap between humans and computers in an unprecedented way. The models built with MDriven transform into the artifacts needed by current technology automatically and in seconds. This way you and your team can stay in the loop of constant problem formulation – not wasting time on realizing a solution to a known problem (traditional coding).

Actually very cool…

Cache Invalidation – a real problem for us all

“There are only two hard problems in Computer Science: cache invalidation, and naming things.”
— Phil Karlton

Caches are all around – when we make a small derived field combining first name and last name into a new attribute, complete name, it is a cache of sorts. But we may typically think of a cache as something our web browser does to avoid sending things over a network. Caches are also typically aggregated data derived from other data – data we snapshot at one point in time and then keep in fact and dimension tables for multidimensional cubes, JSON or XML documents, reduced aggregated sums or the like.
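
Even the derived complete name is a cache with an invalidation problem: it must be thrown away whenever first name or last name changes. A minimal sketch in C# – illustrative only, not MDriven’s actual derived-member implementation:

```csharp
// CompleteName is computed once and reused until one of its inputs changes.
public class Person
{
    private string _firstName;
    private string _lastName;
    private string _completeNameCache;   // the cached, derived value

    public string FirstName
    {
        get => _firstName;
        set { _firstName = value; _completeNameCache = null; }   // invalidate on change
    }

    public string LastName
    {
        get => _lastName;
        set { _lastName = value; _completeNameCache = null; }    // invalidate on change
    }

    public string CompleteName =>
        _completeNameCache ??= $"{FirstName} {LastName}";        // recompute only when stale
}
```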

The main driver for caching is delivery speed by being prepared. Just like when the TV chefs say “…to save some time I have already peeled the potatoes”. I stress the concept of being prepared. It is a much more important way of gaining speed than being fast. A TV chef might be the world record holder in potato peeling – but peeling the potatoes ahead of time is an enormously more efficient way to serve them up fast when needed.

Software development is not potato peeling – but the concept of preparation is the same – if we can be prepared to deliver a requested result, we will be faster when it actually matters.

Being fast when it matters is what performance is all about.  Obviously. 

Being fast when it matters can always be solved by being prepared and being prepared always translates to caching.

If I am correct – why is everyone not caching everything all the time? There are several reasons why developers in some situations choose to avoid caching:

  1. Not knowing what to prepare for – or inability to forecast future requests
  2. Potentially wasted resources – considering storage and time to prepare for things that happen seldom or with low gain in being prepared
  3. No easy way to efficiently know when a cached result has gone stale by change in underlying data
  4. The risk and implications of serving up old stale data outweighs the benefits of being fast

For reasons 1 & 2 I argue that the thing worth caching is always computationally expensive to derive and that we must be reasonably certain that this cost is motivated by someone asking for the result while the cache is still valid.

Point 3 is what we mean by Cache Invalidation – how we know when to not trust the current cache anymore and create a new one.

Point 4 is the risk we take when not having a good cache invalidation strategy.

When point 3 or 4 is cited as reason for not caching we understand why cache invalidation is an important problem in computer science.

The easiest and probably the most common way to invalidate a cache – i.e. stating that it is stale and must be refreshed – is time. A developer may assume that a reader of the cached data will not suffer too much from having a potentially one-hour-old value. Given that assumption the cache may be invalidated every hour, accepting the risk defined in point 4.

If the thing being cached is something where we can easily see when it was updated – like a change time on the original data – then checking whether the cache time is earlier than the change time is a perfect dynamic signal to invalidate the cache, and we do not have the problem described in point 3.
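
A small sketch of both of these simple strategies – a one-hour time limit and a comparison against the source’s last-changed time. The class and method names are illustrative, not a real caching API:

```csharp
// Two simple invalidation signals: cache age (strategy 1) and source change time (strategy 2).
using System;

public class ReportCache
{
    private string _cachedReport;
    private DateTime _cachedAtUtc;

    public string GetReport(Func<string> buildReport, DateTime sourceLastChangedUtc)
    {
        bool tooOld      = DateTime.UtcNow - _cachedAtUtc > TimeSpan.FromHours(1); // strategy 1
        bool sourceMoved = sourceLastChangedUtc > _cachedAtUtc;                     // strategy 2

        if (_cachedReport == null || tooOld || sourceMoved)
        {
            _cachedReport = buildReport();        // the expensive derivation happens here
            _cachedAtUtc  = DateTime.UtcNow;
        }
        return _cachedReport;
    }
}
```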

In the two cases above we either accept a high risk of serving up old stale data, or have a cache where it apparently is not very computationally expensive to find out whether it has been invalidated. In both of these situations we do not have a big issue with cache invalidation.

The true hard problem with cache invalidation arises when it is computationally expensive to check if the underlying data has changed after the cache was created – and we cannot or will not accept the risk of serving up old stale data. When this is the case, the problem is something that keeps software developers up at night and makes them lose sleep.

If software architects had a 100% foolproof cache invalidation scheme that was cheap to implement they would use caching a lot more than they currently do. If they used caching a lot more, super performance would be easier to deliver. It would be easier to divide larger systems into multiple smaller systems that rely on cached common reference data – updated when needed. And they would sleep better at night. Computers would work on refreshing the caches that actually changed and not on redoing things just because time has passed – saving millions and millions of clock cycles that can be used for better things. Users would get served fresh data faster with less effort. There are no losers in this equation.

All of the reasons above sum up to the conclusion that cache invalidation is a problem that is worth addressing.

Since the problem is worth addressing you would think that IBM, Microsoft, Google, Apple, Facebook, Oracle and the rest of the world are investing heavily in this – right? The surprising answer is no-ish. Yes, of course they do caching – but no, they do not offer a 100% foolproof cache invalidation scheme that is cheap to implement, and I will tell you why:

  1. They all lack a generic consistent way to discover data change
  2. They all lack a declarative description of the transformation from data to cache
  3. They all lack a way to detect needed subscriptions to data from such a declarative transformation
  4. They all lack a way of discovering data not part of the cache that changes in such a way that it should now be part of the cache

Since there is no declarative way to handle information and transformation of information in the portfolios of the big tech companies, they are unable to provide solutions that need detailed metadata in order to function. What I mean by “no declarative way to handle information” is that all the large tech companies are stuck in the cul-de-sac of imperative coding – instead of modeling information. For more on this, read the book Doing effective business by taking control of information.

Imperative coding might feel good, might be fun, might come naturally to many – but it leads to havoc when trying to do static analysis of what the system actually can do. Havoc that eventually will crash some airplanes, accelerate some cars into walls and blow up some nuclear reactors. But it also makes cache invalidation really hard.

If we refrain from archaic imperative coding for describing the system Gist and instead describe the information in UML and the transformation in declarative viewmodels with OCL, we actually have no issue at all with cache invalidation. Who would have guessed?

The trick is to make a note, on a member level, of everything that changes – from what and to what – and, when it comes to associations, make a note of both ends of the association that changed – then compare those changes with the exact member instances used by the cache when it was compiled. Voila! This is actually an almost exact persisted replica of the derived-member implementations that have been available for many years in MDriven’s in-memory handling of modeled objects – but now for persisted databases with potentially millions of rows.
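
A rough sketch of that bookkeeping, with made-up types (MemberRef, CacheEntry, ChangeTracker) that are not MDriven’s actual API: each cache records which members it read when it was built, every change is noted per member (and per association end), and only caches whose recorded members intersect the change log are marked stale:

```csharp
// Illustrative member-level change tracking for precise cache invalidation.
using System;
using System.Collections.Generic;

public record MemberRef(string ClassName, Guid ObjectId, string MemberName);

public class CacheEntry
{
    public string Key { get; init; }
    public HashSet<MemberRef> UsedMembers { get; } = new();   // noted while the cache was compiled
    public bool IsStale { get; private set; }
    public void MarkStale() => IsStale = true;
}

public class ChangeTracker
{
    private readonly List<CacheEntry> _caches = new();

    public void Register(CacheEntry cache) => _caches.Add(cache);

    // Called for every member change - for associations, call once per association end.
    public void NoteChange(MemberRef changed)
    {
        foreach (var cache in _caches)
            if (!cache.IsStale && cache.UsedMembers.Contains(changed))
                cache.MarkStale();                 // only caches that actually used the member
    }
}
```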

The real mechanism in MDriven is easy to use – it follows the pattern of server-side declarative viewmodels and it is consistent. This opens many new doors. Stay tuned for the practical details.

Pen testing MDriven Turnkey

Pen testing – or penetration testing – is something you do in order to find weaknesses in systems before some hacker on the internet finds them for you – and uses them to mess up your day.

Recently a government agency in Sweden performed such a pen test on a system built with MDriven Turnkey. The agency is Finansinspektionen and it reports to the Ministry of Finance of the Swedish government.

The result: Triple A

One point of concern was that since MDriven Turnkey gallantly handles any kind of potentially aggressive text data (like attempted script tags and SQL injections) securely – such values could pose a risk to other systems further down the line. We will work with the customer to show how to filter these kinds of potential risks.
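
As an illustration of the kind of downstream filtering meant here – a hedged sketch, not an MDriven Turnkey API – stored text can be kept verbatim but HTML-encoded before it is handed to a system that renders it, and passed as a parameter rather than concatenated when it reaches SQL:

```csharp
// Illustrative downstream filtering: encode for HTML renderers, parameterize for SQL.
using System.Data.SqlClient;
using System.Net;

public static class DownstreamFilter
{
    // For downstream systems that render HTML: neutralize any embedded markup.
    public static string ForHtml(string storedText) =>
        WebUtility.HtmlEncode(storedText);

    // For downstream systems that build SQL: always use parameters, never string concatenation.
    public static SqlCommand ForSql(SqlConnection connection, string storedText)
    {
        var command = new SqlCommand("insert into Notes (Body) values (@body)", connection);
        command.Parameters.AddWithValue("@body", storedText);
        return command;
    }
}
```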

https://www.fi.se/en/about-fi/

3 reasons to start with MDriven

1 Let’s do a pre-study

It is very common that an organization wishes to understand the opportunities and risks in implementing a new set of business features through software development. This is often done in order to allocate a budget and understand if there is a business case. This is done before a development team is allocated or a procurement activity is initiated. All these types of efforts done before actual costs are expected may be defined as a “pre-study”.

The toolbox at hand for the typical business person conducting pre-studies is often limited to word processing, spreadsheets or presentations. Sometimes a modeling tool may be used to illustrate requirement dimensions such as use cases, information models, processes etc.

The hand-over of the pre-study to a development team or a procurement team is very often problematic, as the level of detail needed for them to proceed is not possible to obtain given the tooling above. This lack of detail and the limited support for an accurate abstract representation of the feature set in the pre-study will often result in a lengthy dialog with the business stakeholders in an agile fashion, rendering the pre-study of little or no value.

A more efficient approach would be to equip the business representatives with a tool that allows them to accurately represent any business software through a given set of standard projections. These standard projections may then be used as a base for a procurement document, as a complete architectural document given to a development team, or as input to a model executor of choice.

This requirement cube – the set of standard projections – consists of the following views:

•    Information model
•    Declarative view models
•    State machines
•    Interaction model
•    Access control model
•    Correct and relevant test data.

The MDriven toolbox allows you to describe these views in standard UML, and they can be exported as XMI or XPS.
Challenge yourself and create your next pre-study in MDriven!

2 What (standard) product should we buy?

It is often very tempting to accelerate digitalization efforts in an organization by deciding to buy a standard system, after all – how different can our way of operating our business be from other similar businesses?
One efficient way to establish a baseline requirement to use for the comparison is to define your organization’s domain language in a structured way in order to compare it with standard system candidates. Is it OK that “Work order” is called “Project” in the chosen system, or that a government authority needs to manage “customers” in the (standard) CRM? What can be changed?

Instead of the glossy brochures and smooth presentations, challenge the vendors to demonstrate how your domain model may be managed in their system. This will give you clear insight into how well your model might fit in the candidate system.
Another way is to request the information model of the vendor’s system; by importing parts of it or the complete model, you may use the Autoform feature in MDriven to emulate basic functionality and test how integrations with your existing application landscape would impact it.
It is important to also reflect on the level of change over time needed from the standard system; it will by default represent a master data source for parts of your business. This might be perfectly fine for the areas of your business that see little or no change, typically regulatory areas such as bookkeeping and similar. For areas with a higher degree of change, one needs to understand that the selected system will control what is possible to evolve during the expected system life expectancy. For those areas we suggest that you use the MDriven technology in order to support your business in an efficient way for a long time to come.

3 Gist is fine but modernity is dated

It is not uncommon that what finally sets the End of Life (EOL) for a software system is the lack of support for the technology (modernity) that it is realized in. In mature and slowly evolving businesses it might be that the only reason for changing a software system is the need for new modernity, not necessarily a lack of Gist. One way to handle this situation is to start a project that mainly re-implements the Gist in a new modernity. This will give the system a new supported life of about 3-9 years before the next shift is needed.

Another way is to use MDriven with its reverse-database and auto-form technology – “the Gist extractor” – to re-establish a Gist that may be used to realize a system in one modernity and then move it onto new modernities as they come and go. This ensures a steady supply of system support and hence no need for refactoring. Ever. Given this, it might be possible to “swap” out an old system overnight and replace it with a new system the day after. Same Gist – new modernity. No business impact! This strategy is usually very fast as quite a bit is automated and the rest of the Gist may be done just by replicating the behavior of the old application. We have seen speed-ups of 100x compared to the effort spent implementing the original system.
In the same manner it is possible to “lift and shift” a system from an on-premises environment into the cloud. MDriven can be used to expose a REST API in a secure manner for on-prem applications that lack APIs, thus allowing for a controlled journey to the cloud.

Digitalization for CEO’s

Everyone knows that there are gains to be had by going digital. Exactly what gains, and where to start, is however not obvious. Advice is plentiful but often risky – involving high bets on specific schemes that often enrich the company that gave you the advice in the first place. You know that you will need to do something soon – but you are not prepared to jump high and far when you know from experience that any lasting progress always comes from getting a bit better every day – and by continuing that process. Progress is never acquired – it is always gained by work.

“If you find a perfect COTS-product then you are probably not as unique as you need to be to win”

Most CEOs have this fear of doing too little and at the same time the fear of choosing the wrong path. To manage the fear you may hire a CIO. Once the CIO is in place the fear moves to the CIO – but it does not remove the fear from the company, and the core issues are not handled.

In order to understand the challenges, it helps to divide the company’s processes into two areas.

The first area – core business – is where you need your company to compete with rivals. In this area you hire personnel that are smart, creative and solve all problems, and you do not care that much about prior experience; instead you focus on the ability to learn. It makes a lot of sense to ignore exactly matched experience if you think that you are unique – where would they have acquired this experience from? You focus on the ability to learn and adapt – since being best is not static – it needs constant sharpening.

It is unlikely that you will find a perfect off-the-shelf information system to license for this area. I say unlikely because if you find a perfect COTS-product then you are probably not as unique as you need to be to win. And even if it is perfect today — will it evolve in the same direction as you need to in order to win tomorrow?

The second area of your business processes covers the areas that you must have due to regulations or common business hygiene. The personnel you hire to operate these functions are often hired based on prior experience. Ideally there should be nothing unique in this area – your company should follow the rules and laws – but not overdo it – and never miss a requirement. In this area you will benefit from finding COTS products – riding on the knowledge of others on how relevant processes are upheld and streamlined.

What are typical “hygiene” areas? It varies from business to business — but it is not uncommon for manufacturing companies to see human relations, salary, travel expenses and IT-support as NOT core business functions.

Examples of core business functions for a manufacturing company can be supply, demand, goods procurement and research and development.

The core business functions may find highly specialized commercial business support systems per function to solve specific tasks. But with digitalization we aim higher — we want all the departments to share information — and we want to support new or improved ways of working with digital tools so that the talent we employ is not weighed down with mundane administration.

“A self-digitalizing company where employees make themselves more efficient as part of the daily routine”

Today this is often solved with department-owned Excel spreadsheets. Spreadsheets are copied and moved around between departments, and even if this constitutes information sharing it has several caveats.

  • There is a risk that one part of the organization bases decisions on outdated Excel data.
  • It is common that departments only see part of the information and that they are less able to understand the whole picture.
  • It is not uncommon that queues build up to centrally placed spreadsheets if there are many that need to write to them.
  • It is not uncommon that yet another copy of the spreadsheet is created to solve a need found in one part of the organization but not understood or communicated to the other parts.
  • It opens up for protectionism between departments – misplaced protection of company assets that makes work harder for your employees and thus more expensive.
  • Spreadsheets cannot have granular access rights so that you can expose whom we buy from but not the price – instead we create two spreadsheets and risk that they get out of sync.
  • Every employee can create their own flavor of spreadsheet, and that level of freedom does not help the company compete.
  • Spreadsheets break – and we waste time piecing them back together.

You can probably think of even more issues that arise when you rely on Excel for critical business data – but the upside is that it is very flexible and the spreadsheets are owned by the business and not by the IT department.

The fact that the business becomes independent in providing a business support system is important. If they rely on the IT department to find the best solution it would take a lot of energy and fighting spirit to actually get things usable. And since we know that our needs change — chances are that the needs change faster than the delivery speed of the IT department.

People working in IT often need to be convinced that the business need for change is real and not something we made up just to make life harder for IT – and these discussions are time consuming and exhausting, and do not create any business value.

Getting a working system in place is also hard to do in one go – in Excel you tweak and tweak until you do not need to tweak any more – but the IT department often requires a complete specification and they will hold you to it; even if it is wrong in some aspect and you find this out before delivery, you will still get what you asked for in the first place.

The common critique of Excel from information architects is that it is too brittle and does not help build a meta-understanding of the information the business uses. Excel does not require you to name data – and since there is no naming, the words used to describe it are often different for everyone who comes in contact with the data.

This is not optimal – you will not know how much efficiency you lose by not having a high-level metadata understanding of your business needs until you get one so that you can compare. But I will argue that it is really important to get this understanding in place in order to see the bigger picture of information use – and once you do, you will probably find many synergies and information-reuse possibilities that you have not yet taken advantage of in your organization.

It is when you start to see all the data you use and start to understand the dynamics of your information – in what order data is refined and used – that you can say you are on a good path to digitalization of your core processes. And once on this path – you must continue to walk it.

Walking the path of digitalization will put information technology into use a bit more each day in your business. As more and more information support is provided you lessen the burden on your employees. You will notice this as increased productivity. As you get the right digitalization up and running you will notice more consistent results and higher quality of the artifacts used by and delivered from the business.

If you do this right your most talented employees will look further and will see more possibilities. When your departments get together and own the process of getting better information management support each day — you know that you can relax a bit. If you get here you have created a self-digitalizing company where employees make themselves more efficient as part of the daily routine. That is how you want to digitalize your company. No struggle trying to force systems in from above — instead let them grow from below. This way you get acceptance from the employees and you use the very best of their talents to build systems that will help you win.

Establishing this environment is what MDriven.net does.