How Machine Learning Builds Your Applications For You

Or what do user architects have to do with machine learning?

User architects are the users of the system. Even before the system exists they enter our minds in a vague, blurry sort of way. We imagine the characters who will be the users of the system, and these imaginary, fictional users are then formed into user profiles. We assign characteristics to these users, give them personalities, and then engage in dark arts to bring them alive, or, if not to bring them to life, at least to clear away some of the elements that obscure our view of them.


The dark arts, for us, involve not much more than building an application that meets the user's needs: providing the type of functionality we believe will be useful or even essential. We have a large amount of prior information on what these needs are, and much evidence on which features will be incorporated and will give value. Yet this evidence does not guarantee the system will be a perfect, or even good, solution.

To devise a more perfect solution we go back to the users and gather their behaviour. To begin with we do this through surveys and interviews, but these give us only a first reaction. They do not provide the depth of information needed to craft a good user experience, and are too scant to provide a plan for ground-breaking features or cutting-edge functionality. Nonetheless they are a starting place and allow us to produce a minimum viable product, from which a mature application can grow and develop.

In this sense it gets us to the next stage and is the crucial step towards a continually improving system. This first step of getting users to tell us about the application provides the vocabulary; in this way they have begun to take a role in the architecture of the system. They have told us their understanding of the language used in the application. If an application provides ways of working with data, it is critical that we know what the users mean by the term 'data'. The design at this stage may be a little mud-like, but we are able to find truths within it.

Getting the application to tell us what the users want

A long history of providing software gives us a view of the pains and problems users experience, and our inspiration is easing them. Asking users what pains and frustrations they encounter brings this into sharp focus. The problems they face occur over and over again, making simple tasks arduous. Too often the systems that seek to ease the burden are overcomplicated.

All engineers, including software engineers, tend to over-engineer. Users can, and do, ask for too many trivial features that end up cluttering their lives. How do we discover what is essential and what is clutter? How do we progress along the path to continuous improvement? Yes, we ask our users, but in a different way: we ask them to show us.

An application that works with data and provides analytics should know how users ask questions of that data. It should observe and record how they do it and what result sets they collect. To make the application simpler to use it should be able to predict the questions the users ask. It should be able to suggest which parts of the data they will need and how it is best presented.
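As a sketch of what such observation might look like in code, the hypothetical `QueryLog` class below records each query a user runs and "predicts" by suggesting the one they run most often. The class and method names are assumptions for illustration, not part of any real system.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: log each query a user runs, then "predict" by
// suggesting the query they run most often.
public class QueryLog {
    private final Map<String, Integer> counts = new HashMap<>();

    // Record one observed query.
    public void record(String query) {
        counts.merge(query, 1, Integer::sum);
    }

    // Suggest the most frequently recorded query, or null if none yet.
    public String suggest() {
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}
```

A real system would of course learn from far richer signals than raw frequency, but even this crude count is enough to start anticipating the user.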

Machine learning is an ideal tool to make this happen. The task could be done manually, but it involves observing a lot of queries asked in many different ways; done by hand the process would take eons. We do not have that time: the application needs to respond much more quickly, and to reach a position where it adapts quickly. It needs to respond in roughly the length of time it takes to describe what it is responding to.


In the past we would involve users in workshops to capture and observe their behaviour. Lab sessions where we watched and recorded what they did and how they did it. This approach to gathering data gave qualitative information but it lacked quantity. Not having enough data gave nothing to back up the insights gleaned from the small samples. Even with the most careful selection of focus groups the risk of a skewed view arose.

Learning by doing it again and again.

The qualitative value of observing and recording users is unequivocal. Yet without quantitative data there was no way of ensuring the required balance. Eliciting insights requires measuring and moderating both quantitative and qualitative inputs.

The manpower and effort required to capture the data, from observing the user, is expensive and time consuming. Yet, repetitive and time consuming tasks are those that software is so very well suited to. Machine Learning, in particular Deep Learning and Neural Networks, is nourished by vast amounts of data.

Deep Learning (DL) is a type of Machine Learning (ML), and within Deep Learning there are subsets: Deep Neural Networks (DNNs), typically used on tabular datasets; Convolutional Neural Networks (CNNs), typically used on image data; and Recurrent Neural Networks (RNNs), typically used on temporal data. The illustration below shows multiple layers within a neural network, comprising neurons and the synapses interconnecting them.


When software is able to take part in its own design it becomes introspective. Capturing the user's interaction with the system collects a new corpus of data, and classifying sections of that data exposes it to analysis. This enables users to architect the application through the simple process of using it. Let us consider how this would work in practice by examining the possibilities in a data analysis system.


In a system designed to capture data, blend it, sort it and explore it, the steps can be isolated. Capturing the data requires the user either to upload files to the system or to plug the system into a source. For this illustration, let's look at how machine learning helps refine the capture process.

Uploading files requires locating them and, more often than not, adding some descriptive information. The same applies to plugged-in data sources: an API will more than likely have a descriptive name, or a name and a description.

A problem may occur in finding the right files, and files with the right content. The descriptive information added at the time of uploading may provide the key. If a user uploads a series of spreadsheets and adds a description (e.g. 'Weekly Sales'), the system is able to perform specific tasks.

At upload the system can examine the structure of the documents and count instances of words or patterns. It can detect attributes of style and layout, look at types of maths or formulas, and count occurrences of data types, such as string or text, integer or number.
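A minimal sketch of this kind of inspection, assuming simple regular-expression rules for deciding each value's type; the `CellTypeCounter` class and its type names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: count occurrences of simple value types in the
// cells of an uploaded sheet. The classification rules are assumptions,
// not a real file-inspection API.
public class CellTypeCounter {
    public static Map<String, Integer> countTypes(String[] cells) {
        Map<String, Integer> counts = new HashMap<>();
        for (String cell : cells) {
            String type;
            if (cell.matches("-?\\d+")) {
                type = "integer";       // whole numbers, e.g. "12"
            } else if (cell.matches("-?\\d*\\.\\d+")) {
                type = "float";         // decimals, e.g. "3.5"
            } else {
                type = "text";          // everything else
            }
            counts.merge(type, 1, Integer::sum);
        }
        return counts;
    }
}
```

The resulting counts form one small part of the feature vector a classifier could later draw on.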


Format can reveal other data characteristics such as: float, decimal, text, operator. A series of patterns and sub-patterns can then combine with extracts from the descriptions. Once these combinations become exposed to a classifying process the system has learned something. It has taught itself what a “sales” document might look like. It may then do the same thing with “weekly”, “monthly”, “quarterly” or “annually”.

The same applies to labels such as "sales", "purchases", "orders" or "invoices". Once it has learned this, instead of asking for a classification it will be able to predict what the description fits. The application can ask the user "Are you looking to work with monthly data?". The system is then able to look for documents or data sources that contain data structured around monthly intervals.




The more the system examines the data, the more it learns and the more certain it becomes of what it has taught itself. The question is: who is teaching whom? Is the machine learning of its own accord? Yes it is, but it is the user who teaches it the limits and extent of the data the descriptions apply to. The user is the original source of the language that the classification process draws on.

Extracts of the user inputs, descriptions and subsequent queries are collected as metadata. This metadata, the data sets uploaded, the queries and the query result sets are the source of the learning process. Labels are extracted from the descriptions, and algorithms are written to weight distinct parts. Sets of data have their structures classified, as do the result sets and outputs that the queries deliver.

No sooner said than done – courtesy of machine learning

Machine, or deep learning, takes the user input and predicts actions. These predictions refine the interface and this is the origin of changes to the system architecture. This is how data captured from user actions architects the application. It is akin to an evolutionary process in that it delivers continuous improvement. The application gets better and better.

As the application improves its ability to comprehend natural language queries, parts of the interface become candidates for replacement. The select feature for "month", "quarter", "year" may get deprecated. The application will learn to provide the relevant summary reports and visualisations in response to: "Get me the last 4 months of sales figures for Europe.", "What do we need to start ordering within the next 12 weeks?", "What happens if this price point increases by .5%?". The system will learn what reports it is expected to deliver, and produce them before they are requested.

To many software engineers this may appear a radical departure from how a system architecture ought to be designed. Computer science for a long time appeared more comfortable ignoring the users. Many thought taking specifications from those who procure the software and then building functionality to deliver the specification was the correct route. Then came a more enlightened view that observing user interaction had some value. Now building the results of user interaction into the design would appear to be an imperative.

User interaction can be captured through detailed logging, collected and stored. Machine learning learns from examples and experience instead of from hard-coded rules. The ability to process massive amounts of user input throws open new doors: with large amounts of ever-growing data, machine learning has something to learn from.

Software creating itself has been mooted, and artificial intelligence hints that this option is, to an extent, now possible. As it stands, software is capable of providing some tools to improve itself. One of the most obvious is automated testing; another is tracking how the user uses the application.

Machine learning transforms the users of an application into its architects: User Architects. In truth it is the user data, captured as they use the application, that architects the application. In this sense it may be better to think of them as Data Architects, given that we are considering an application whose prime task is working with data.

Deep learning is able to play a significant role in writing software applications and to reveal new methodologies. It will shift the whole notion of writing software from a mainly engineering discipline closer to a natural science. A simple illustration of how both machine learning and deep learning aid this transformation is given below.

Example of how machine and deep learning are used to write software.

Programming functions

For the techies.

A fundamental part of programming is the writing of functions. This follows the process of:

  • set a specification for a function
  • implement that function to meet the specification.



Machine learning and functions

Machine learning allows us to:

  • input examples of (x,y) pairs
  • guess the function y = f(x)
  • run an algorithm to learn from the examples
  • compare the output of the function to the expected result
  • modify the function and/or the algorithm.

Where machine learning is useful to writing functions, in this instance, is its ability to look through a lot of examples very quickly. It gives us data to test whether our "guess" is true. It could also provide us with a generalisation, such as x/y or the constant φ = 1.6180339887, or better yet provide a rule for our function, such as x(n) = x(n-1) + x(n-2).
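The guess-and-check step of the loop above can be sketched very simply: score a candidate function against known (x, y) examples. The `GuessChecker` class and the candidates passed to it are illustrative assumptions, not an established API.

```java
import java.util.function.Function;

// Minimal sketch of "guess and check": count how many known (x, y)
// examples a candidate function reproduces exactly.
public class GuessChecker {
    public static int countMatches(int[] xs, int[] ys, Function<Integer, Integer> guess) {
        int matches = 0;
        for (int i = 0; i < xs.length; i++) {
            // Compare the guess's output with the expected result.
            if (guess.apply(xs[i]).equals(ys[i])) {
                matches++;
            }
        }
        return matches;
    }
}
```

A learning algorithm would then modify the guess (or how it is generated) whenever the match count falls short, repeating until the examples are satisfied.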

A good example of how Google engineers use machine learning in programming is covered by Alek Icev, Test Engineering Manager, here. This example is interesting because it looks at using machine learning to improve the algorithm used by the search code. As Icev points out, in "the real online world where we want to give answers (predictions) to our users in milliseconds and ask the question how are we going to design automated tests …  embedded into a live online prediction system. The environment is pretty agile and dynamic, the code is being changed every hour, you want your tests to run on 24/7 basis …"

Deep learning and developing functions

“A neural network with a hidden layer has universality: given enough hidden units, it can approximate any function. It’s true, essentially, because the hidden layer can be used as a lookup table.” **

Supervised learning is the most frequently used technique for training artificial neural networks and decision trees. In artificial neural networks the classifier seeks to identify errors within the network and then adjust the network to reduce those errors.

Neural networks are composed of neurons and the connections between them. A multi-layer, feedforward, backpropagation neural network is composed of:

  1. an input layer of nodes,
  2. one or more intermediate (hidden) layers of nodes, and
  3. an output layer of nodes.

Weight values are associated with the connections between nodes; each node may also carry a bias term. The values of the weights determine the relationship between input data and output data. Weight values are established through training, where the network learns properties from typical input data characteristics.


The algorithm type chosen for this example uses backward propagation of error (backpropagation). Unlike the straightforward machine learning example given above, the direction used to compute the error vectors (δ) is backwards, starting from the final layer.

The network makes assumptions based on an algorithm; the supervisor provides the network with the answers. The network then compares the assumptions and the answers and makes adjustments according to its errors. In other words, it involves comparing the output a network produces with the output it was meant to produce. The difference between these two outputs is used to modify the weights of the connections between the units in the network.
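This compare-and-adjust cycle can be sketched, for a single linear neuron, with the classic delta rule: compute the output, take the difference from the supervisor's answer, and nudge the weight accordingly. The learning rate and training data below are illustrative assumptions.

```java
// Sketch of the compare-and-adjust loop for one linear neuron: the
// weight is repeatedly nudged toward values that reproduce the targets.
public class DeltaRule {
    public static double train(double[] xs, double[] targets, double lr, int epochs) {
        double w = 0.0; // initial weight
        for (int e = 0; e < epochs; e++) {
            for (int i = 0; i < xs.length; i++) {
                double output = w * xs[i];          // the network's answer
                double error = targets[i] - output; // difference from the supervisor's answer
                w += lr * error * xs[i];            // adjust the weight to reduce the error
            }
        }
        return w;
    }
}
```

Backpropagation generalises exactly this update to many layers, carrying the error signal backwards from the output layer through each hidden layer.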


Writing a function with deep learning


Deep learning is able to take example (x, y) pairs and form a representation of them at several levels of abstraction, producing a function that generalises well for a novel x. For example, suppose we have a function f(x).

For any function there is a neural network such that for every input x, the value f(x) is output from that network. We could start with x = 1 and f(x) = 1; this matches our training data, but when we input x = 2 and get f(x) = 5 it fails, as the training data says 3. Single inputs and layers would not be able to predict the function we are seeking. It would be different if our training data were very clean, say 2, 3, 5, 8, 13, 21, 34 as input.


We can have multiple inputs and seek to create a single output; the output we are looking for is a function to predict what y would be when x = n. From our example pairs (where we know what x and y are): X1 = [2, 3, 1.5], X2 = [3, 5, 1.66], X3 = [n, n, n].



The output ƒ(x) could be expressed by the closed-form Fibonacci formula: f(n) = (φ^n - (1 - φ)^n) / √5.

In an OO language such as Java the function could be written as follows.
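The original Java sample did not survive in this copy of the post, so what follows is only a hedged sketch of one way it might look, using Binet's closed form with the golden ratio φ:

```java
// Hedged sketch (the original example was not preserved): f(n) via
// Binet's closed form, f(n) = (phi^n - (1 - phi)^n) / sqrt(5).
public class Fib {
    static final double PHI = (1 + Math.sqrt(5)) / 2; // the golden ratio

    public static long fib(int n) {
        // Round to the nearest integer to absorb floating-point error.
        return Math.round((Math.pow(PHI, n) - Math.pow(1 - PHI, n)) / Math.sqrt(5));
    }
}
```

For example, `Fib.fib(7)` gives 13 and `Fib.fib(10)` gives 55, matching the sequence discussed above.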

In a functional language such as Clojure it could be written in an equivalent way.


One of the many advantages of using deep learning to produce a function is that it would also work if the function has many inputs, ƒ = ƒ(x1, …, xm), and many outputs.




In a follow-up to this post I will explain the mechanics of machine learning, especially neural networks and deep learning, described by Michael A. Nielsen as 'one of the most beautiful programming paradigms ever invented.'*

This post, excluding the last technical section, is repeated on Medium

All images in this post are by photographer Les Stone  taken during 20 years of photographing Vodou ritual in Haiti © 2013 Les Stone. They are published elsewhere on the internet and I have shared them here (without the author’s permission to date) to reflect my personal admiration for his work. BR
* Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015
** Christopher Olah
