The first day ‘Spot’ entered Foster + Partners’ Riverside Studio, someone ran to hug it! Spot’s resemblance to its canine counterpart inspires people to engage with the robot as if it were a living being. But, beyond the novelty of its appeal, the Applied R+D (Applied Research + Development) team’s initial encounter with Spot revealed an untapped potential for its application within the architecture, engineering, construction, and operations (AECO) industries. This led to Foster + Partners’ decision to join American engineering and robotics company Boston Dynamics’ Early Adopter Program for Spot last year.
As the only architecture practice to be part of the initiative, Foster + Partners have been investigating how Spot’s skills – the agile robot can climb stairs and traverse rough terrain with ease – lend themselves to capturing and monitoring progress on-site. These extraordinary capabilities make Spot an invaluable resource as we attempt to rethink and reinvent our construction processes.
Foster + Partners’ Applied R+D group have been working alongside Boston Dynamics to experiment with Spot in dynamic environments such as construction sites; capturing changes regularly, and assessing the robot’s capacity to facilitate the comparison between ‘as-designed’ models and ‘as-built’ realities. We wanted to use the robot’s ability to perform consistent, semi-autonomous scans of construction sites to close the gap between the digital and physical states of a building, which can often drift apart.
During our investigations, Spot has been used on a range of sites. Starting with Foster + Partners’ London campus, the robot dog was used to consistently monitor and capture on-site progress during recent renovations. The Applied R+D team then assessed the results of Spot’s expeditions to see how they could create faster and more adaptive design-to-construction cycles.
Constant, quick checks on accuracy and timeframes – how well we conform to set deadlines – can be instrumental in integrating design and construction, allowing for greater flexibility. Additionally, we have been investigating how Spot’s scans could be of use not only during the design and construction process but also when the building is operational.
Spot’s second site outing was to our Battersea Roof Gardens mixed-use project – part of the third phase of the Battersea Power Station development. As a testbed, we created a map to roughly set up the “missions” – routes with automated scans set at particular points of the walkthrough – that Spot needed to follow on-site in order to scan certain areas and capture specific data. Returning to the site on a weekly basis allowed Spot to rerun the same missions, with the process yielding a sequence of highly comparable, consistent models.
These repeated scans of construction sites, or “construction twins”, are four-dimensional digital representations showing how a construction site changes over time. The robot’s ability to repeatedly and effortlessly complete routine scans in a dynamic environment proved invaluable not only in terms of consistency but also in the large amount of high-quality data collected.
Through this process, we gathered a sequence of scans that could help us track the project’s progress against project timeframes as well as facilitate regular comparisons against the Building Information Model (BIM). The scans can help ensure that rapid, and most importantly accurate, design changes can be made to accommodate any differences captured on-site – all in a matter of days. This could result in critical savings both in terms of time and money.
As previously mentioned, Spot’s abilities were also put into practice at Foster + Partners’ London campus, where it continued the work of building a construction twin by tracking the progress of renovations undertaken in one of our design studios. In addition to exploring the idea of four-dimensional scanning and its relation to spaces that change over time for the purpose of creating a construction twin, we were also able to explore it in a post-occupancy scenario.
We conducted these four-dimensional scans in our Hub: a flexible work and social space with a wide range of uses and constantly changing furniture layout. Scanning spaces such as this at regular intervals allows us to assemble a detailed timeline of the space. With such post-occupancy twins at our disposal, we can create digital representations of how spaces are used over time.
By combining temporal and spatial information with data from sensors that read environmental conditions and occupancy, we can construct intricate models of how people, furnishings and environmental conditions interact to form a comprehensive whole. This, in turn, can help us to operate our premises more efficiently and anticipate how new designs will perform.
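One simple way to combine the two data streams is to pair each periodic scan with the sensor reading closest to it in time. The sketch below is purely illustrative – the function name and the shape of the data are assumptions of ours, not any real sensor API.

```python
def pair_with_sensors(scan_times, sensor_readings):
    """For each scan timestamp, pick the sensor reading nearest in time.

    scan_times: iterable of timestamps (e.g. seconds since an epoch).
    sensor_readings: list of (timestamp, value) tuples.
    Returns a dict mapping each scan time to its nearest reading.
    """
    return {
        t: min(sensor_readings, key=lambda r: abs(r[0] - t))
        for t in scan_times
    }
```

In practice one would interpolate or window the sensor data rather than take a single nearest reading, but the principle – joining spatial snapshots to environmental time series on the time axis – is the same.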
Spot offers two main benefits when it comes to capturing site data from occupied spaces. For one, it offers the ability to survey locations that may not be well-suited for access by people, whether at height, on difficult terrain, or in extremes of temperature. Spot is also able to work in uncontrolled environments with people present. Neither Spot nor any human occupants need specialised training to share the space as long as some simple protocols regarding safety distances are followed. These two qualities make it particularly well suited to our interests in construction and reality twins.
That being said, Spot is rather conspicuous in office settings during working hours. It turns heads wherever it goes, and the gawking can be very distracting in contexts where focused work is required. Additionally, it is a rather noisy walker. It is not excessively loud, but neither does it pad along quietly as a living dog does. We found – particularly for our work on post-occupancy twins – that it was best to conduct our auto-walk missions outside of office hours. This way, changes to furnishings and spatial layouts continue to be captured without interruption to occupant work and interaction.
Increasing exposure to robotics in domestic and occupational settings will likely reduce the novelty of robotic presence in the future, possibly to the point where it is unremarkable. This, along with improvements to the mechanics of movement through hardware and software innovation, will likely make robots of all types much less conspicuous, both acoustically and in people’s judgements about safety, privacy, and potential for interaction. Until then, we have discovered that in settings where focus and calm are expected it is best to use Spot for reality capture outside of office working hours.
One of our main objectives for Spot was to investigate the viability of an automated end-to-end workflow for capturing three-dimensional model data for digital twins. Before the practice had Spot at its disposal, gathering data in situ presented several challenges. Although each site is different, three main tasks are common to collecting data for digital twins: scanning, registration, and reporting.
Traditionally, the first of these steps – scanning – is considered time-consuming, though the planning involved requires minimal manual labour. The person responsible for the scan studies the space and decides on specific points to take the scans from, taking into consideration accessibility, maximum exposure, and coverage of the whole area. Once this is done, the scanning process commences. This involves manoeuvring bulky – and often expensive – scanning equipment around the site to the chosen points. The process is manageable when it only needs to be done once, but it becomes tedious and laborious if it needs to be undertaken every couple of days.
The scans collected on-site then need post-processing – the registration step. Most scanners on the market cannot store their position within the site or relative to other scans – none of the scans are geolocated in relation to the site or to each other. This means that when all the scans are merged, they are not aligned, and another time-consuming process of aligning them ensues. Many software packages on the market attempt to automate this process, but their success is highly dependent on how well the scan locations were originally planned.
A balance needs to be struck between the number of scans and their overlapping coverage. Taking more scans than needed leads to longer scanning times on-site, longer processing times, and larger file sizes to archive digitally. On the other hand, if too few scans are taken, the automatic alignment process won’t find enough overlap and common markers between them, requiring manual intervention to find and flag common planes and landmarks between each pair of scans.
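The overlap requirement can be made concrete with a crude measure: voxelise each scan’s points onto a coarse grid and count the occupied cells the two scans share. This is our own illustrative sketch, not how any registration package actually works – real alignment matches geometric features, not raw voxels.

```python
def voxelise(points, size=0.1):
    """Snap each (x, y, z) point to a coarse voxel grid of the given cell size."""
    return {tuple(int(c // size) for c in p) for p in points}


def overlap_fraction(scan_a, scan_b, size=0.1):
    """Fraction of scan_a's occupied voxels that are also seen in scan_b.

    Automatic alignment needs this kind of shared geometry to succeed;
    a low fraction is a warning that manual registration may be needed.
    """
    va, vb = voxelise(scan_a, size), voxelise(scan_b, size)
    return len(va & vb) / len(va) if va else 0.0
```

A planner could use such a score when choosing scan positions: enough overlap between neighbouring scans to align them, but not so much that every scan re-captures the same geometry.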
In the case of construction twins, the purpose of the final step – reporting – is to compare the point cloud data of the space (i.e., a set of points in three-dimensional space, each representing where the scanner’s laser beam hit a surface; millions of such points together form a three-dimensional digital reconstruction of the physical space) against the design intent represented in a BIM model, then work out and report whether or not the site is progressing according to plan. Laser scanners provide great capture accuracy for this task, sometimes down to around two millimetres. But again, a person traditionally needs to sit down and compare the two – yet another time-consuming process that is susceptible to human error.
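At its simplest, the comparison amounts to measuring how far each captured point lies from the design surface it should sit on, and flagging deviations beyond a tolerance. The sketch below assumes, purely for illustration, that the design intent for one wall is the plane x = 0 – a real comparison runs against the full BIM geometry.

```python
# Tolerance of the same order as laser-scanner accuracy (illustrative value).
TOLERANCE_MM = 2.0


def deviations(points_mm, tolerance=TOLERANCE_MM):
    """Return the points whose distance from the x = 0 design plane
    exceeds the tolerance. All coordinates are in millimetres.

    The x = 0 plane stands in for the as-designed surface from the
    BIM model; flagged points indicate as-built deviations.
    """
    return [p for p in points_mm if abs(p[0]) > tolerance]
```

Automating exactly this kind of per-point deviation check – across an entire model rather than one plane – is what removes the error-prone manual comparison from the reporting step.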
In the envisioned scenario where such scans will be retrieved on a regular basis, this three-part process becomes extremely inefficient. If the time elapsed between scans is a couple of days, the time between scanning and reporting in this traditional process would become a bottleneck; scans could be collected faster than we could process and make sense of them. So how could Spot resolve, or alleviate, this?
Spot resolves many of the problems of the scanning stage, the most significant among them being consistent repetition. For Spot to start scanning a space, a user first needs to use the controller to drive Spot around the space and record a "walk".
The beginning of the walk is identified by a fiducial marker – similar in shape to the QR codes scanned by mobile phones – placed in Spot's line of sight. During this initial walk, Spot uses its cameras and sensors to record landmarks and features in the space, remembering the route and the "waypoints" it is being driven through. The user can choose to make Spot perform a certain action at set points (action points). Actions vary depending on the payloads mounted on Spot. For example, the robot can use its robotic arm to open a door, or move its limbs to stand in a specific pose. In this instance, the most common action used was to initiate a scan and wait for it to finish.
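The structure of a recorded walk can be sketched as data: a fiducial that identifies the start, an ordered list of waypoints, and actions bound to particular waypoints. The names below (`Waypoint`, `Mission`) are hypothetical shorthand of ours – this is not the Boston Dynamics Spot SDK, just an illustration of the concept.

```python
from dataclasses import dataclass, field


@dataclass
class Waypoint:
    """A remembered point on the route (simplified to 2D site coordinates)."""
    position: tuple
    action: str = ""  # e.g. "scan" or "open_door"; empty means no action here


@dataclass
class Mission:
    """A recorded walk: a start fiducial plus an ordered list of waypoints."""
    fiducial_id: int
    waypoints: list = field(default_factory=list)

    def replay(self):
        """Yield (position, action) pairs in route order, skipping
        waypoints that have no action bound to them."""
        for wp in self.waypoints:
            if wp.action:
                yield wp.position, wp.action
```

Replaying the same `Mission` week after week is what makes the resulting scans so comparable: the scan actions fire at the same positions every time.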
After the initial walk has been recorded, a user can use Spot's autonomous “auto-walk” feature to replay it. When placed in front of the fiducial marker that marked the beginning of the recorded walk, Spot prompts the user to pick from the recorded missions that start from that point.
Spot will then attempt to follow the recorded path. Even though a construction site is ever-changing, Spot manages to adapt through its capability to bypass obstacles on its set route. If something, or someone, is blocking its path, it will attempt to navigate around it whilst maintaining a safe distance. Recorded actions will be taken at the positions where they were previously set. A complete blockage, however, will prompt the operator to choose how to proceed: skip the scan, or continue from where Spot is.
Spot can store multiple routes – a very useful feature that enables the pre-recording of multiple paths for different zones within a floor plan, or within different projects. The handler can then replay these missions on demand as many times as they wish. Path planning is as important here as it is in the traditional approach, but Spot's ability to adapt makes the workflow resilient to the constant changes of a construction site.
One thing that can impede Spot’s capability to scan large spaces and complete long missions is its battery capacity. Boston Dynamics have tried to address this by allowing the user to swap batteries without interrupting the robot’s operation, and thus without affecting localisation and the scanning workflow. The potential for Spot to charge autonomously via a charging dock is another new feature that may help resolve this.
In contrast to the problems of performing registration traditionally, Spot embeds metadata with each scan it takes. Part of this data indicates the position of the scan calculated in relation to the space. Localisation is computed from several sources: forward kinematics relative to the position where Spot was booted, and visual and geometric features collected from the space around the robot. Of course, there are also challenges to consider. For example, although Spot is adept at localisation by combining these different techniques, there are still unresolved issues – particularly concerning some materials (e.g., glass and mirrors) – that require the handler’s attention.
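Stored pose metadata is what makes automatic alignment possible: each scan's points can be transformed from the scanner's local frame into a common site frame using the pose at which the scan was captured. The sketch below reduces this to 2D (position plus heading) for clarity – real localisation is three-dimensional and fuses kinematics with visual features, as described above.

```python
import math


def to_site_frame(points, pose):
    """Transform scan-local (x, y) points into the site frame.

    pose is the scan's stored metadata, here simplified to
    (tx, ty, heading_radians): the scanner's position and heading
    in the site frame at capture time. Applies the standard 2D
    rigid transform: rotate by the heading, then translate.
    """
    tx, ty, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(tx + c * x - s * y, ty + s * x + c * y) for x, y in points]
```

With every scan carrying its own pose, merging a week's missions reduces to applying each scan's transform – which is why the scans load already aligned, as described below.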
Loading the scans digitally results in a great starting point, as all the scans are processed and aligned relative to each other. This is a major advantage over the manual scanning process both in terms of speed and complexity.
In and of itself, Spot doesn't offer anything to aid in the reporting step compared to the way it is traditionally executed. Nonetheless, by decreasing the time required for scanning, it provides more time to focus on reporting and comparing the point cloud data to the BIM models. We didn't want the automation to stop at this point, though, which is why we decided to investigate whether there were any offerings on the market that could aid in this step before investing time in-house to resolve the problem.
This is how we were introduced to the Avvir Reality Analysis Platform: a hardware-agnostic platform that leverages a variety of algorithms to – amongst other things – automatically compare point clouds to BIM models and generate progress reports and critical insights. This capability is the final step in offering an end-to-end automated workflow.
However, it is important to bear in mind that there are challenges in operating robots on-site. These include maintaining correct localisation within the space, which Spot attempts to resolve by combining different techniques, such as sensor-based localisation. Battery capacity, and swapping batteries without affecting the workflow, are also problems; both are challenges that Boston Dynamics have tackled. Other ongoing issues are on-site safety and maintaining autonomy.
The future success of the use of robotics on-site depends on multiple factors, such as the ability to navigate unstructured environments. Spot shows great promise in this regard, with its capability to balance, navigate difficult terrain, avoid obstacles, and even pick itself up after falling. This comprises what Boston Dynamics identifies as “athletic intelligence”.
Moreover, future advancements in fields such as computer vision, localisation and machine learning will accelerate adoption in a wide range of fields. A good example is the use of robotics for operations like welding, where accuracy is important. While robots can be accurate to sub-millimetre level in a controlled factory environment, on-site – where surroundings are in a constant state of change – robots face a massive challenge in calibrating their location in real-time.
Research labs are starting to produce working prototypes and solutions to overcome this accuracy problem. Human-computer interaction research, in fields like the voice and gesture recognition we see in smart speakers, will also help robots work hand in hand with human workers. They could become companions on some tasks without the need for prior task planning or a tablet interface.
Our work with Spot attempts to better understand what happens in the buildings we design, both as they are assembled and once they have been occupied. The promise of digital twins requires processing regular updates from the physical world – not least the fabric of the building and its constituent spaces, furnishings, and infrastructure. Automated reality capture offers the ability to regularly update this critical source of data.
Permanently installed sensors are a good option for capturing certain types of building data, such as air quality, acoustics, and occupancy. They are relatively inexpensive, inconspicuous, and can report at frequent intervals. However, for the use case of capturing changes in spatial arrangements at regular intervals, mobile scanning is a much better proposition. Good quality scanning hardware is expensive, so rather than install scanners ubiquitously in a building, it is far more sensible to have a single, high-quality scanner and to move it through the spaces to be recorded. Robots – and Spot in particular – offer a very good platform for doing just that.
Editors: Tom Wright and Hiba Alobaydi
20 December 2021
Martha Tsigkari, Adam Davis, Khaled El-Ashry, Sherif Tarabishy and Anders Rod
Architects Martha Tsigkari, Adam Davis, Khaled El-Ashry, Sherif Tarabishy and Anders Rod are members of the Applied R+D (Applied Research + Development) group at Foster + Partners, a team known for their pioneering use of technology in order to push the limits of what is possible in architecture and engineering. Applied R+D is focused on pursuing state of the art research in different fields from complex geometry to machine learning, and applying its findings to real world design challenges.