Working with REST – a quick way to turn a Json example into a model

When you work with REST services you will consume a lot of structured results in the form of Json data.

Reading the Json specification to create a matching model – attribute by attribute, relation by relation – is tiresome.

To help you with this, consider the new function “add attributes, associations and classes from clipboard json”.


With this function you may copy some output – like the sample below that shows Spotify categories – and pour it into your model:
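To illustrate the idea, here is a rough sketch (hypothetical, not MDriven's actual implementation) of how properties in a Json sample can be mapped to model attributes, with nested objects and arrays becoming classes and associations. The sample data is made up to resemble a Spotify categories response:

```python
import json

# Hypothetical sample resembling a Spotify categories response (illustrative only).
sample = json.loads("""
{
  "categories": {
    "href": "https://api.spotify.com/v1/browse/categories",
    "items": [ {"id": "toplists", "name": "Top Lists"} ],
    "limit": 20,
    "total": 31
  }
}
""")

def infer_attributes(obj):
    """Map each Json property to a rough model attribute type.
    Nested objects/lists would become classes and associations."""
    attrs = {}
    for key, value in obj.items():
        if isinstance(value, bool):          # bool before int: bool is a subclass of int
            attrs[key] = "Boolean"
        elif isinstance(value, int):
            attrs[key] = "Integer"
        elif isinstance(value, float):
            attrs[key] = "Double"
        elif isinstance(value, str):
            attrs[key] = "String"
        elif isinstance(value, dict):
            attrs[key] = "<association to new class>"
        elif isinstance(value, list):
            attrs[key] = "<multi-association to new class>"
    return attrs

print(infer_attributes(sample["categories"]))
# → {'href': 'String', 'items': '<multi-association to new class>', 'limit': 'Integer', 'total': 'Integer'}
```

The actual MDriven function does this inference for you and creates the attributes, associations and classes directly in the model.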


Using the new MDrivenDesigner function I get this:


Here is a small video:


The power of text

UML is at its core a graphical language, and as such it is not easily manipulated by the most powerful tool in the digitalized world today: the text editor.

The text editor is a concept with thousands of implementations. Software developers tend to grow deep relationships with their text editor of choice.

The power of text stems from the well-established standardization of the alphabet (the allowed symbols) and the power of copy-paste, which can move large texts from one text editor implementation to another and never fail. This ability creates a deep trust in text as an information carrier that will never let you down. When something gains that kind of trust you are in love – and love will conquer all (Lionel Richie).

Can we tap into the power of text editing tools and allow them to be used in MDriven Designer?

There are many aspects to consider going down this road. In the purely technical sense we already have a text representation of the model in MDriven Designer, as the file format is XML based. When you save as ecomdl all files are XML, and this is important because it lets Git, SVN and other source code repositories merge and diff easily. When saving as modlr the file is actually a zip archive of all the files in ecomdl.

The problem with the XML file format we use is that it is not intended for human interaction – it has generated unique identities that humans hate and computers love. XML also requires translation of important characters like < and >, and this messes up the readability of OCL expressions in the textual representation a lot. The unique identities tie the text to a specific model – and one of the core strengths of textual representation is that it is movable from one context to another with the ease of copy and paste. The GUIDs in the XML format break this ability too much to be an acceptable solution.

We need a format for structured information that does not require translation of important characters –> Json.

We need a good way to describe hierarchies of structured information that can build up our Json documents –> ViewModels.

We need a way to import such structures and merge them with existing data based on a human readable identifier –> hmm, here lies the work to be done.

From our earlier endeavors we have successfully worked with import of tab-separated data based on ViewModel definitions. In the case of tab-separated data the information is always tabular – and now we need hierarchical information, so the two are not entirely comparable – but what we found by working with tabular data is that treating the first column as the key, and looking up an existing object based on this key (or possibly creating a new object), solved the data merging problem very intuitively.
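The tabular merge rule described above can be sketched in a few lines (an illustrative upsert, not MDriven's actual importer; the column names are made up):

```python
def import_tab_separated(existing, text, columns):
    """Upsert objects from tab-separated text. The first column is the key:
    a matching existing object is updated, otherwise a new one is created.
    'existing' maps key -> object dict. Illustrative sketch only."""
    for line in text.strip().splitlines():
        values = line.split("\t")
        key = values[0]
        obj = existing.setdefault(key, {})   # look up by key, or create new
        for name, value in zip(columns, values):
            obj[name] = value
    return existing

# Usage: "Alpha" already exists and is updated; "Beta" is created.
data = "Alpha\t1\nBeta\t2"
objs = import_tab_separated({"Alpha": {"Name": "Alpha", "Size": "0"}},
                            data, ["Name", "Size"])
print(objs)
# → {'Alpha': {'Name': 'Alpha', 'Size': '1'}, 'Beta': {'Name': 'Beta', 'Size': '2'}}
```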

The thing with hierarchical data is that keys are not globally unique – they are unique only within their hierarchical branch (the name of an attribute is unique within its class – but other classes may have an attribute with the same name). The context-sensitive key is crucial to what we need to do – it allows us to have keys that are meaningful to the human reader and removes the need for the globally unique identifier that we technically use. When we remove the technical keys – and trust the hierarchical context and a human-readable key, such as the name attribute of an object – the information becomes self-sufficient. When we have self-sufficient snippets of data in text format our information reaches some level of immortality – and can make its journey through eternity (Celine Dion).
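The branch-local key idea can be sketched as a recursive merge over dictionaries, where matching happens per level – so two classes can both own an attribute named “Name” without any clash (an illustrative sketch under the assumption that every object carries a “Name” key; not MDriven's actual merge code):

```python
def merge(existing, incoming, key="Name"):
    """Recursively merge lists of dicts, matching on a human-readable key
    that only needs to be unique within its own branch."""
    by_key = {o[key]: o for o in existing}
    for item in incoming:
        target = by_key.get(item[key])
        if target is None:
            existing.append(item)            # no match in this branch: create
            continue
        for prop, value in item.items():     # match found: merge property by property
            if isinstance(value, list):
                merge(target.setdefault(prop, []), value, key)
            else:
                target[prop] = value
    return existing

# Both Person and Car have an attribute keyed "Name" – no conflict,
# because the key is only looked up within each class's own branch.
model = [{"Name": "Person", "Attributes": [{"Name": "Name", "Type": "String"}]}]
snippet = [
    {"Name": "Person", "Attributes": [{"Name": "Age", "Type": "Integer"}]},
    {"Name": "Car", "Attributes": [{"Name": "Name", "Type": "String"}]},
]
merge(model, snippet)
print([c["Name"] for c in model])
# → ['Person', 'Car']
```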

Even if our data is self-sufficient, it may expect certain things from its environment. For example, in a textual representation of a class, the class has an owning-package reference to where it belongs. The Class does not own the package – it is the package that owns the Class – but if we do not state what package the Class belongs to within the textual format, then the Class will have no package, and that is not valid. In this case we could argue that you must choose a package before importing Classes, so that we know where to put them. This strict approach may work for root-level objects, but it is a harder requirement to fulfill for objects deeper down in a hierarchy. Let's say that a Class owns possibly many ClassActions, and that a ClassAction can belong to a MenuGroup. If the MenuGroup exists in the environment we are pasted into, we want the import to use it. But must we demand that the MenuGroup exists prior to importing the data? If we are not careful here we will defeat the purpose of free-flowing information by imposing requirements for merging – exactly the situation we want to avoid. This problem might not have a catch-all solution – we may need to resort to some kind of clean-up rules to verify consistency and delete and repair after an attempted merge.

This is the same problem that always pops up when you have heaps of related information: you may want to segment the information – but where should we draw the lines to divide by? The problem is hard to solve perfectly since it depends on perspective – and perspectives shift – and people have a tendency to think that multiple perspectives are equally ok, thus providing no clues as to where it is best to divide the information.

When you do not have enough information to solve a problem you can either do nothing and wait – or – as we at MDriven ALWAYS do – take a guess and move forward. In fact, our approach to never stand still and wait is core to our take on development. We gave this approach a name: “Provocative Development”. What we have found is that taking a guess and moving forward is a very good way to get new information. Sometimes we immediately get feedback that “NO, that way is wrong – it would be better this way”. And this is a lot more helpful than silence – at least when the investment needed to move – and also to move back – is low. And that is also what MDriven is all about: the cost of development is lower than or equal to the cost of discussing or waiting.

Since MDriven tools radically change old truths about what is expensive, we have a playing field that is not really comparable to traditional methods.

Having said all that, I want to introduce the experimental Json text editing in MDriven Designer.

Right-click on a Class/Extras/Experimental Edit with Json:


Try this out – you can copy the Json to a text editor you like (or use the simple editor provided), paste it back and Apply.

The “Merge in Apply” checkbox explained: consider a Json with only one attribute defined: “Name” : “ThisIsANewAttribute”. Should this mean that all other existing attributes should be removed? This is the default interpretation – since the Json defines only one attribute, the others will be removed. But if you check “Merge in Apply”, the existing attributes are kept while adding the one(s) from the Json.
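The two interpretations can be sketched like this (an illustration of the described semantics, with made-up dict shapes – not MDriven's code):

```python
def apply_attributes(cls, json_attrs, merge_in_apply):
    """Apply the attribute list from a Json edit to a class.
    Without merge: the Json is the full truth, other attributes are removed.
    With merge: existing attributes are kept; Json ones update or add."""
    if not merge_in_apply:
        cls["Attributes"] = list(json_attrs)
    else:
        by_name = {a["Name"]: a for a in cls["Attributes"]}
        for a in json_attrs:
            by_name[a["Name"]] = a           # update existing or add new, keep the rest
        cls["Attributes"] = list(by_name.values())
    return cls

new_attrs = [{"Name": "ThisIsANewAttribute"}]

# Default (unchecked): the one attribute in the Json replaces all others.
c1 = apply_attributes({"Attributes": [{"Name": "Existing"}]}, new_attrs, False)
print([a["Name"] for a in c1["Attributes"]])   # → ['ThisIsANewAttribute']

# "Merge in Apply" checked: existing attributes survive.
c2 = apply_attributes({"Attributes": [{"Name": "Existing"}]}, new_attrs, True)
print([a["Name"] for a in c2["Attributes"]])   # → ['Existing', 'ThisIsANewAttribute']
```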

You can use this way of editing to make mass-edits that a text editor is better at than setting individual properties in the MDriven Designer. You can also use this to easily copy-paste constructs between models.

The “Re-Create Remove empties” button explained: in order for you to know what Json properties are valid, we pour them all out – but if you want a minimal Json to share with someone, the empty fields can be skipped.

It is always the first attribute in a Json object that is the key – in this case “Name” for the class, and also “Name” for attributes. If you change the value of a key, the Json object is treated as a new object (possibly deleting the old object on apply, based on the Merge setting).

You can reach the “/Extras/Experimental Edit with Json” command for Classes, ViewModels and StateMachines. You can also reach it for Diagrams, where it is treated differently and is named “Export/Import all on diagram as Json”:


This will look for what you have placed on the diagram and follow ViewModels and Classes to give you the complete Json representation of the model described. This complete definition may be changed and applied to the same MDriven Designer instance or another.

Try this out and give us feedback, and we will adapt and make sure that we somehow bring the power of text to MDriven Designer.


MDriven has a clear separation between the application tier and the persistence tier. These tiers are commonly put on different network hosts. And when they are on different hosts we need to communicate between them over the network.

This network communication will include requests for:

  • Searches in persistence storage
  • Access to object content of specific objects
  • Finding if anything has changed since last
  • Updates of changed objects
  • Group fetch given ViewModel knowledge

These different needs have different details in their requests and responses – and it all boils down to an api that uses a strongly typed object graph of data.
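As a rough sketch of what “a strongly typed object graph” means here, the request kinds listed above could each be a small typed structure (hypothetical names and fields for illustration – not MDriven's actual API):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QueryRequest:              # searches in persistence storage
    class_name: str
    where_clause: Optional[str] = None

@dataclass
class FetchRequest:              # access to object content of specific objects
    object_ids: List[str] = field(default_factory=list)

@dataclass
class ChangedSinceRequest:       # finding if anything has changed since last
    last_seen_timestamp: int = 0

@dataclass
class UpdateRequest:             # updates of changed objects
    dirty_objects: List[dict] = field(default_factory=list)

@dataclass
class ViewModelFetchRequest:     # group fetch given ViewModel knowledge
    viewmodel_name: str = ""
    root_object_id: str = ""

# Each request/response pair travels between the tiers as a serialized
# object graph; the transport underneath can vary.
req = QueryRequest(class_name="Person", where_clause="Age > 18")
print(req)
```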

20 years ago we might have used Remote Procedure Calls to implement this, or maybe DCOM, but then Microsoft came up with Windows Communication Foundation – WCF. The good thing about WCF was that it was very configurable – meaning that you had the basic need set up – the api to communicate over – and then you could use configuration files to control the technical aspects of HOW the transport from point A to B should be done.

WCF sounded great and we started to use it in 2009. It allowed us to have one implementation and still be flexible, letting you decide if the transport should be over TCP, HTTP or HTTPS, whether the traffic should be encrypted, etc.

Now it is almost 2019 and .net Standard and .net Core are hot topics. The world has consolidated around REST-style communication over HTTPS, and no one really thinks very much about the need for something else. Why bother adding configurability to something that will always be the same? This is probably the reason why Microsoft now ditches WCF and does not move it along into .net Standard wholeheartedly (you can call WCF services, but you cannot implement a WCF service with .net Standard 2.0).

MDriven now needs a new way to communicate, and that way is WebAPI over HTTPS – this is where the industry is today and we gladly follow.

But the technical aspect of communicating from point A to B is not very exciting. As long as it is secure and not slow – it can be anything.

The interesting part is really how we fold down the object graph we need to send into a transportable stream. This is called SERIALIZATION.

The standard way to serialize parameters to and answers from WebAPI services is JSON, but there are also other hot implementations for serializing data, like Google's protobuf.

When implementing MDriven's new communication layer we had to choose which serialization format to use.

We did not choose JSON and this is why:

  • Data type precision loss – byte, int32 and int64 all translate to a JSON number, which then deserializes back to int64 (the same problem exists for float types). This causes a lot of problems in downstream handling of data – the types are there for a reason and we need to keep the precision. We could invent wrapper objects to keep type fidelity, but that would defeat the purpose of JSON.
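The type loss is easy to demonstrate with a round trip in any language: JSON has a single number kind, so whatever declared type a value started out as is not recoverable on the other side (Python here purely as illustration; in .NET deserializers the tokens typically all come back as Int64):

```python
import json

# Three values that in a typed system would be byte, int32 and int64.
values = {"a_byte": 200, "an_int32": 100_000, "an_int64": 2**40}

# After a JSON round trip they are indistinguishable number tokens:
# the declared-type information is simply gone from the wire format.
round_tripped = json.loads(json.dumps(values))
print({k: type(v).__name__ for k, v in round_tripped.items()})
# → {'a_byte': 'int', 'an_int32': 'int', 'an_int64': 'int'}
```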

We did not choose protobuf and this is why:

  • Limited support for untyped transport – List&lt;object&gt; is not handled even if the object is a well-known type (like int or datetime). This is problematic for us since we follow the UML model that you have created – and for us a class has a list of attributes that can be of any type, as long as you have an AttributePersistenceMapper that knows how to store it. We could treat the transport format as the persistence format – but today we defer persistence knowledge to the persistence tier, so we need to be able to “just get the object content over the wire” so that we can interpret how a chosen persistence mapper should persist the values.

What is left is to use the DataContractSerializer – this is actually the same serializer that is used by WCF – so as it turns out, everything changes and still stays the same.

To use WebAPI communication instead of WCF you use the new PersistenceMapperWEBAPIClient (MDriven.Net.Http.dll) instead of PersistenceMapperWCFClient. On the server you subclass the public abstract class MDrivenPersistenceController&lt;T&gt; : ApiController (MDriven.Persistence.WebApi.dll).