The construction industry may be behind on adopting artificial intelligence and machine learning, but their transformational potential in our sector should not be underestimated, says Adam Ward from Space Group and BIM Technologies

It comes as no surprise that the high tech, telecoms and financial services sectors are the leading early adopters of machine learning and artificial intelligence. After all, these industries are well-known for their willingness to invest in and integrate new technologies, in order to gain competitive advantage and achieve internal process efficiencies.

But it’s painfully noticeable that the construction industry is missing from the list. Why is it taking so long for construction to catch up with advanced technology? As an industry, we amass huge amounts of data – but it’s only now that we are finally starting to understand what we might do with it and where it might take us.

Take, for example, machine learning: this technology might allow us to generate progressive and independent thinking within the machines and computers that drive our industry. In other words, machines are learning to think for themselves, but it’s only by feeding them vast quantities of data that we can enable them to learn. From there, computers might analyse that data, to predict patterns and behaviours within occupied buildings and help designers to make more informed choices in the design and prototype phases of a project.

Machine learning primer

Machine learning is a branch of the wider theory and development of computer systems known as artificial intelligence, or AI. It has many potential applications in construction for the design and operational stages of a project, but it is perhaps building operations that might be our best target. In effect, it’s about giving a building a brain.

Machines are very good at consuming and analysing large amounts of seemingly unrelated data and finding patterns in the chaos. Once sufficient quantities of data have been collected and cleaned, they can then be used for training the machine learning algorithms that enable building systems to react to detected patterns and make informed decisions. These huge volumes can also drive a core set of algorithms that use neural networks with many hidden layers – sometimes referred to as deep neural networks or DNNs – to assist in learning, classification and prediction.
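To make the training idea concrete, here is a minimal sketch in Python of a single artificial neuron (a perceptron) learning a pattern from made-up sensor signals. Everything here is hypothetical and far simpler than a real deep neural network, which would have many hidden layers and vastly more data:

```python
# Minimal sketch: a single artificial neuron learning from synthetic,
# hypothetical building-sensor readings. Real systems would use deep
# neural networks with many hidden layers and far larger data sets.

# Each sample: (motion detected, CO2 above threshold) -> room occupied?
training_data = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),  # occupancy only when both signals agree
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Repeated exposure to the same patterns 'trains' the neuron,
# much as repetition reinforces connections in a brain.
for _ in range(20):
    for inputs, target in training_data:
        error = target - predict(inputs)
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print(predict((1, 1)))  # -> 1: both sensors firing means occupied
print(predict((1, 0)))  # -> 0
```

After training, the neuron has adjusted its weights so that only the combination it was repeatedly shown as ‘occupied’ triggers a positive prediction.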

Learning curves

To put this issue in a more everyday context, we might look at child development. When a child is born, it doesn’t know how to walk, talk, or feed itself. These are traits that are learnt, usually through repetitive patterns that the human brain processes and interprets as common functions.

About the author

Adam Ward is a digital construction professional with close to 20 years’ experience. As Technology Director at Space Group, he is responsible for developing the processes and technologies that Space Group uses to solve complex problems in the construction industry.

Machine learning operates in a very similar way to the neurological processes of the human brain. By looking at patterns within data, it can make informed decisions on what will most likely happen next and adjust accordingly.

When a child eats sugar, an electrical signal is fired to the brain that says ‘I like this’. Each time the child eats more sugar, that response is triggered again and, over time, a neurological connection becomes burnt into the brain. The brain has essentially been ‘trained’.

This is very similar, broadly speaking, to how machine learning and neural networks work. The main difference is that a machine gathers information from large existing data sets much quicker than a human can.

Another similar example: If a computer programme is given thousands of images of frogs, and it is told thousands of times ‘This is a frog’, it will learn to recognise other frog pictures or objects that look like a frog. This is why companies such as Google, that have already collected vast amounts of data, are leading the way in machine learning.

In the same way, if a computer programme sees thousands of architects selecting a particular type of door handle – for use on a particular door type, in a particular building type, in a particular country – it can use this knowledge to make future recommendations to architects automatically about which door handle they might select. It will also know why it’s making that recommendation, because its selection is based on the choices of previous architects.
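A recommendation of this kind can be sketched very simply: count what previous architects chose in the same context and suggest the most frequent option. The door types, handles and records below are entirely hypothetical:

```python
from collections import Counter

# Hypothetical past design choices.
# Each record: (door type, building type, country) -> handle selected.
past_selections = [
    (("fire door", "hospital", "UK"), "stainless lever"),
    (("fire door", "hospital", "UK"), "stainless lever"),
    (("fire door", "hospital", "UK"), "brass knob"),
    (("entrance door", "office", "UK"), "pull bar"),
]

def recommend(context):
    # Count what previous architects picked in this exact context and
    # suggest the most common choice - the 'why' is simply precedent.
    choices = Counter(handle for ctx, handle in past_selections
                      if ctx == context)
    return choices.most_common(1)[0][0] if choices else None

print(recommend(("fire door", "hospital", "UK")))  # -> stainless lever
```

Real recommendation systems would weigh far more factors than an exact context match, but the principle — recommendations grounded in previous selections — is the same.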

This is why many software companies are transitioning their platforms to the cloud, so that they can start to combine information from many different data sources, and start to connect the dots.

Smart buildings need smart data

The development of smart buildings is on the rise, due to the availability of the cheap sensors and connected devices that make up the Internet of Things (IoT). These are becoming commonplace in our homes, with many everyday gadgets and appliances now available as smart products. Likewise, many buildings already have sensors that measure light, heat, energy consumption, footfall, movement, capacity and more. Many are part of proprietary, closed systems that don’t talk to each other, but this is changing as the systems become increasingly able to share data in open standard formats or via application programming interfaces (APIs).
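As an illustration of the kind of normalisation this sharing depends on, the sketch below maps readings from two imaginary proprietary systems into one common record format. All field names and payloads are hypothetical:

```python
# Hypothetical payloads from two proprietary systems that describe the
# same zone in different shapes.
lighting_api = {"lux": 420, "zone": "L2-East", "ts": "09:00"}
hvac_api = {"temperature_c": 21.5, "room": "L2-East", "time": "09:00"}

def normalise_lighting(raw):
    # Map the lighting system's fields onto a shared open record format.
    return {"zone": raw["zone"], "timestamp": raw["ts"],
            "metric": "illuminance_lux", "value": raw["lux"]}

def normalise_hvac(raw):
    # Map the HVAC system's fields onto the same shared format.
    return {"zone": raw["room"], "timestamp": raw["time"],
            "metric": "temperature_c", "value": raw["temperature_c"]}

# Once normalised, readings from both systems can be combined and analysed.
readings = [normalise_lighting(lighting_api), normalise_hvac(hvac_api)]
print(readings)
```

Open standard formats and APIs do exactly this job at scale: once every system describes its readings the same way, the data can finally be joined up.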

Once data is connected, we can layer on top those machine learning algorithms that allow us to predict the behaviours of a building and its occupants, not only for efficiencies in operating and maintenance, but also to inform the way we design future buildings to optimise space and flow and select more appropriate building materials.

Equally, many clients now want to know when an asset is likely to break down so that it can be fixed or replaced ahead of time. Once we have enough data, we can apply a linear regression algorithm to predict asset failure. Machine learning is also used in more everyday tasks, such as providing more relevant search results by drawing on previous user search trends. This technique has been used for years by companies such as Facebook and Netflix, which use machine learning to push news stories and recommendations to your feed.
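The linear regression idea can be sketched in a few lines: fit a straight line to hypothetical condition readings, then extrapolate to the point where the asset crosses a failure threshold. The pump, readings and threshold below are all assumptions for illustration:

```python
# Hedged sketch: fitting a simple linear regression to hypothetical
# vibration readings from a pump, then predicting when the trend will
# cross an assumed failure threshold. Real maintenance models would use
# much richer data and proper validation.

hours = [0, 500, 1000, 1500, 2000]       # running hours
vibration = [2.0, 2.6, 3.1, 3.4, 4.0]    # mm/s, made-up readings
FAILURE_THRESHOLD = 7.0                  # assumed vendor limit, mm/s

# Ordinary least-squares fit: vibration = slope * hours + intercept
n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(vibration) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, vibration))
         / sum((x - mean_x) ** 2 for x in hours))
intercept = mean_y - slope * mean_x

# Solve slope * h + intercept = FAILURE_THRESHOLD for h
predicted_failure_hours = (FAILURE_THRESHOLD - intercept) / slope
print(round(predicted_failure_hours))  # roughly 5,100-5,200 hours here
```

With these made-up numbers, the model suggests scheduling a replacement well before the predicted crossing point — the essence of predictive maintenance.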

A similar approach is now seen with BIM object provider bimstore, to provide manufacturers with insights into not only what buyers have selected, but what factors may have influenced their decision.

Early days

The computers in a modern car or aeroplane collect and analyse vast amounts of data every second. Companies like Tesla use this technology to analyse our driving patterns, diagnose problems before they happen and make recommendations on when to change the tyres – and which tyres to select – based on predicted driving patterns, distance travelled and road surfaces. That’s just the tip of the iceberg for machine learning capabilities. Construction is still way behind.

With many buildings now being designed, delivered and operated using digital processes and technologies such as BIM, machine learning is also starting to appear in many of the tools that architects and engineers use daily. Companies such as Autodesk are investing heavily, and their technology previews such as Project Dreamcatcher and Project Fractal hint at a future where machines are able to ‘design’.

For the past few years, there has been a strong message that in the future we will not tell the computer ‘what to do’, as we do now. Instead, we will tell it ‘what we want to achieve’ and leave it to find the optimal solution and design.

Autodesk is also looking at how machine learning might make construction sites a safer place. It has recently acquired a company that has developed intelligent video and photo-tagging software to enable cameras to highlight potential hazards and issues on a construction site.

Rapid interest in, and development around, machine learning has been created by a perfect storm of emerging technologies across all industries. Data is becoming more accessible through cheap IoT devices, open standards and public APIs. These allow devices and services to talk to each other. This is then combined with ease of access to cloud storage and infinite computing, where we can collect, store and analyse vast amounts of data. And the more data we can process in this way, the more accurate machine learning will become.

The new oil

The insight we can derive from data, and the patterns it reveals in our actions, selections, behaviours and choices, combine to bestow great power on those companies that can own, control and interpret it. Data has become ‘the new oil’ that drives growth for organisations such as Facebook or Google that are competing for your personal information.

With this great power comes high risk, which is why new rules such as the General Data Protection Regulation (GDPR) are being introduced. Companies must also adjust to new threats: in digital construction, we must contend with specific standards such as PAS 1192-5:2015.

New power, new opportunities, new risks and new threats: they are all changing the way our industry operates. In other industries, they are already combining to become the ‘new normal’.

If you enjoyed this article, subscribe to AEC Magazine for FREE