All Eyes And Ears: Exploring the Use of Technology in Securing Crowds

Agnes E. Venema, “Mihai Viteazul” National Intelligence Academy, Bucharest, Romania; University of Malta.

An Introduction to Crowded Spaces

Many people are able to describe a place they feel is crowded, but providing a comprehensive definition of a crowded space remains difficult. Added to this difficulty is the cultural dimension, which in part influences our experience of what is crowded (Whiting & Nakos, 2008, p. 9). Despite this, many crowded spaces do form part of risk assessments and have been a security concern requiring special attention, and in some cases active defence. The UK Home Office, for example, produced guidance on crowded places and counter-terrorism in partnership with the Department for Communities and Local Government, aimed specifically at providing advice on how to limit the vulnerability of crowded spaces to terrorist attacks and to mitigate the human cost should an attack take place (Crowded Places: The Planning System and Counter-Terrorism, 2012).

While crowded spaces are difficult to define, they do share some common elements. A crowded space needs to be publicly accessible, although this does not necessarily mean free admission (Lu et al., 2010, p. 5) or uncontrolled access (Coaffee et al., 2008, p. 104). It can be either publicly or privately owned (Németh, 2009, p. 2464), but a key identifying feature is that it needs to be predictably accessible to the public. Furthermore, it has been argued that crowded spaces are generally considered to be ‘soft targets’ (Doelwitten in Beeld – Vijftien Jaar Jihadistische Aanslagen in Het Westen, 2019, p. 5) in the sense that access to the space is not hindered by excessive or invasive security measures that alter its visitors’ experience (Coaffee, 2010, p. 943). This division splits an airport, for example, into a ‘soft target’ crowded space (the arrivals and departures halls) and a more secure ‘hard target’ (the gates), to which access is only granted after an invasive security check.

The security dimensions of crowded spaces are twofold. Firstly, crowds are the target of attacks. The Dutch General Intelligence and Security Service (AIVD) analysed the targets of jihadist attacks in the Western world between 2004 and 2018, concluding that 30% of the attacks in that time frame took place in publicly accessible spaces, even if the space itself was not the primary target (Doelwitten in Beeld – Vijftien Jaar Jihadistische Aanslagen in Het Westen, 2019, p. 14). Furthermore, the report concluded that the degree of success of attacks on public spaces is high, at 83% (Doelwitten in Beeld – Vijftien Jaar Jihadistische Aanslagen in Het Westen, 2019, p. 15). Crowds have been targeted with (suicide) bombings, mass shootings, and weaponised (driven) vehicles, sometimes referred to as vehicular terrorism. The AIVD classified vehicular terrorism as a relatively new phenomenon, one that has been used as a method by a wide range of actors of various ideologies, including in 2017 in Charlottesville, USA (Doelwitten in Beeld – Vijftien Jaar Jihadistische Aanslagen in Het Westen, 2019, p. 22), in Toronto, Canada, in 2018, and outside the Palace of Westminster, the seat of the UK Parliament, in 2017 and 2018.

Urban planning has mitigated some of the risks to crowds from an outside attack. For example, during the height of The Troubles – the conflict in Northern Ireland – a so-called ‘Ring of Steel’ was erected around Belfast city centre (Coaffee, 2009, pp. 26–27), essentially creating a security perimeter. Unfortunately, that strategy led to the displacement of attacks rather than their prevention (Coaffee, 2009, pp. 24–25). A similar trend is noticeable when it comes to critical infrastructure: securing parts of critical infrastructure has led to a form of displacement that parallels other forms of crime displacement (Coaffee, 2009, pp. 61–63). The Brussels airport attack of 2016, for example, saw terrorists detonate explosives in the crowded part of the airport located before the security checks and freely accessible to the public.

Secondly, crowded spaces and crowds give a person a certain anonymity. From a privacy perspective this is a desirable situation in which people can go about their daily lives without the fear of constant surveillance. That same anonymity, however, makes a crowded space an ideal cover for criminals and terrorists alike. A number of terrorist attacks, such as the 2019 Easter bombings in Sri Lanka, were launched by people walking within a crowd and pretending to belong before detonating their explosives.

Technology has been used to secure crowded spaces for decades. The next sections briefly touch upon the use of technology in the preparatory phase of securing crowds (ante factum), while monitoring a crowded space (in situ), and after a significant security incident has taken place (post factum). It ought to be recognised that these phases are fluid and can evolve rapidly. Finally, this article elaborates upon the more recent developments in this field, indicating the trajectory of those developments as well as some of the risks involved in the use of emerging technologies in the security field.

Ante Factum

Certain crowds can be predicted, and a response plan can therefore be put in place ahead of time. Announced demonstrations, large sports events, and political rallies are all events for which a security response can be prepared. Assessing what type of crowd is likely to attend is of key importance to planning the response.

A rather simplified typology of crowds differentiates between peaceful, mixed, and hostile crowds. Peaceful crowds include those at busy malls or rush-hour transportation hubs. The fact that these crowds may have some criminals among them, such as pickpockets, does not negate the fact that the intent of the crowd is peaceful. Mixed crowds contain peaceful elements as well as more hostile ones. Sports crowds, for example, are often mixed: the majority of the crowd is usually peaceful, but some hard-core fans may try to engage the opposing team’s fans or law enforcement in a hostile manner. Lastly, certain crowds can be classified as hostile, a classification based on the behaviour of the crowd, not on the nature of the cause the crowd supports or its intentions. The recent storming of the Kyrgyz parliamentary building by protestors contesting the election result (‘Kyrgyzstan Protesters Storm Parliamentary Building over Election Result’, 2020) is an example of this.

Prior to an event expected to draw a crowd, those responsible for safety and security will want to have as much situational awareness as possible. This is not only important from a security point of view; it also benefits the safety of the crowd. Insufficient crowd control has caused deadly incidents, including the 2010 Love Parade disaster in Duisburg, Germany (Helbing & Mukerji, 2012, p. 16), and the 1989 Hillsborough disaster in the UK (Scraton, 1999, p. 286), despite the fact that neither involved a hostile crowd. Situational awareness can help identify routes for crowd control and potential choke points. It can also inform a plan for the removal of street furniture that runs a risk of being weaponised by hostile crowds.

Technology plays a vital role in establishing situational awareness ante factum. Geographic Information Systems (GIS), for example, can be used to create maps and, in conjunction with modelling software, to identify traffic flows and choke points (Hutson, 2008). Furthermore, if an event is identified as having an elevated risk of turning hostile, for example because previous events of the same kind drew mixed or hostile crowds, camera images may be used to adequately prepare officers on the ground, and profiles can be made of previous instigators for use in situ (see below).
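
To make this concrete, the sketch below models a hypothetical venue as a graph of walkways using the open-source networkx library: edges that many shortest paths funnel through (high betweenness centrality) are candidate choke points. The layout, junction names, and widths are invented for illustration, and real GIS-based modelling tools are considerably more sophisticated.

```python
# Choke-point identification on an invented venue layout: nodes are
# junctions, edges are walkways, and edge betweenness centrality measures
# how much pedestrian traffic is likely to funnel through each walkway.
import networkx as nx

G = nx.Graph()
walkways = [                      # (junction, junction, width in metres)
    ("gate_north", "plaza", 8.0),
    ("gate_south", "plaza", 8.0),
    ("plaza", "underpass", 3.0),  # narrow underpass
    ("underpass", "stage", 6.0),
    ("plaza", "stage", 10.0),
]
for a, b, width in walkways:
    G.add_edge(a, b, width=width)

# Walkways that combine high centrality with a narrow width merit attention.
centrality = nx.edge_betweenness_centrality(G)
for (a, b), score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    width = G[a][b]["width"]
    flag = "  <-- candidate choke point" if score >= 0.3 and width < 5.0 else ""
    print(f"{a} -- {b}: centrality={score:.2f}, width={width}m{flag}")
```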

Social media analytics software, such as Dataminr (Dataminr’s Real-Time AI Platform, n.d.), may also be used to track prior interest in an event or to alert responders that an event is unfolding. Such software can then continue to be used for in situ tracking of developments.
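
As a toy illustration of the underlying idea (Dataminr’s actual platform is proprietary and far richer), the sketch below raises an alert when keyword matches in a message stream spike within a short time window. The keywords, thresholds, and messages are all invented.

```python
# Keyword-spike alerting over a message stream using a sliding time window.
from collections import deque

KEYWORDS = {"stampede", "gunshot", "fight", "crush"}
WINDOW_SECONDS = 300      # 5-minute sliding window
ALERT_THRESHOLD = 3       # matches within the window that trigger an alert

hits = deque()            # timestamps of keyword matches

def process(timestamp, text):
    """Return True if this message pushes the window over the threshold."""
    if any(k in text.lower() for k in KEYWORDS):
        hits.append(timestamp)
    while hits and hits[0] < timestamp - WINDOW_SECONDS:
        hits.popleft()    # drop matches that fell out of the window
    return len(hits) >= ALERT_THRESHOLD

stream = [(0, "great concert!"),
          (60, "people are getting crushed near the stage"),
          (90, "fight breaking out by the exit"),
          (120, "crush at the barrier, send help")]
for ts, msg in stream:
    if process(ts, msg):
        print(f"ALERT at t={ts}s: possible incident unfolding")
```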

In Situ

While an event is underway or a crowd has formed unexpectedly, there is often some form of monitoring of that crowd. Traditionally, law enforcement would deploy officers on the ground and a helicopter to maintain situational awareness. Technological advancements, however, have made such in situ monitoring both easier and more sophisticated.

First of all, in many cases the traditional helicopter has been replaced by an Unmanned Aerial Vehicle (UAV) (Treverton et al., 2011, p. 77), colloquially known as a drone. Equipped with cameras, its purpose is the same as that of a helicopter: to provide a bird’s eye view of a certain area in order to monitor crowd flows and to identify areas with crowd control issues or areas where violent clashes have broken out.

CCTV (closed-circuit television) is used for similar purposes, but developments in CCTV manufacturing over the past decades have made CCTV more mobile (as the example of the UAV illustrates) as well as smarter. In addition to the traditional static CCTV camera, the range of cameras with PTZ (pan, tilt, zoom) capabilities has expanded, as has the resolution in which they record images. These cameras allow an operator to move them and zoom in on a car license plate, a person or group of persons, or an item.

Furthermore, certain CCTV systems have been equipped with microphones so that they can alert control room operators when certain noises are registered, such as aggressive or panicked screaming or gunshots (Audio Analytics for Professionals, n.d.). More controversially, CCTV equipped with live facial recognition capability is being trialled in some jurisdictions, for example in the UK. Proponents of this technology have pointed towards its use in enforcing area bans and identifying those wanted by the police, as was the purpose of its experimental use in Cardiff, UK (Satariano, 2019). Opponents point towards the fact that the use of such technology violates human rights, as it is not a necessary or proportionate tool for achieving its goals, an opinion that was affirmed in a court case filed against South Wales Police over its use of automatic facial recognition (AFR) (Rees, 2020). Opponents also point towards the chilling effect that (perceived) constant surveillance has on people, which can result in a degree of self-censorship (Fussey & Murray, 2019, pp. 36–38).
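
To give a flavour of how acoustic alerting works, the deliberately crude sketch below flags short, loud transients against the ambient noise level; commercial systems such as Sound Intelligence’s rely instead on trained sound classifiers. All signal parameters here are synthetic.

```python
# Flag audio frames whose energy spikes far above the ambient baseline --
# a crude stand-in for trained gunshot/scream classifiers.
import numpy as np

SAMPLE_RATE = 16_000
FRAME = 512

def loud_transient_frames(signal, factor=8.0):
    """Return indices of frames whose RMS energy spikes above the median."""
    frames = signal[: len(signal) // FRAME * FRAME].reshape(-1, FRAME)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    baseline = np.median(rms)            # ambient noise level
    return [i for i, r in enumerate(rms) if r > factor * baseline]

# Synthetic test: one second of quiet noise with a loud burst injected.
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(SAMPLE_RATE)
audio[8000:8100] += 0.8 * rng.standard_normal(100)   # burst at ~0.5 s
for i in loud_transient_frames(audio):
    print(f"possible incident sound around t={i * FRAME / SAMPLE_RATE:.2f}s")
```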

As mentioned, social media analytics, including keyword analysis, can also be used in situ to monitor changes in a crowd’s behaviour. Social media accounts may announce counter-protests and can include veiled wording intended to instigate violence, as happened in Charlottesville, USA (Klein, 2019, pp. 302–303). Furthermore, social media tracking can alert first responders that an unpredictable situation is unfolding. Equally, social media is sometimes used to publish manifestos by people who are about to perpetrate heinous crimes, as Elliot Rodger did in 2014 prior to his killing spree in California (Withnall, 2014), or to stream the crimes themselves, as the perpetrator of the Christchurch mosque shootings in New Zealand did (‘Christchurch Attacks: Facebook Curbs Live Feature’, 2019).

Post Factum

With the proliferation of cameras over the past decade, incidents that involve crowded spaces have a high chance of being captured on video, which may help first responders and investigators. The 2013 Boston Marathon bombing, for example, saw the analysis of a huge amount of video footage obtained from the general public and CCTV in order to identify the perpetrators (Stroud, 2013). Law enforcement also uses social media to ask the public for images and videos of a certain event to help with an investigation, or to look for manifestos as described above.

The issue is the data overload such requests may cause. When riots occurred in Vancouver in 2011 after the Stanley Cup finals, law enforcement worked their way through 5,000 hours of video footage received from the public (Stroud, 2013). To diminish the strain on personnel and speed up investigations, video processing technologies have been introduced to overcome such data overload: the software takes over the process of sifting through hours of footage to find an item (Bouma et al., 2017), an incident, or a person. One of the difficulties in using such analysis software, including facial recognition technology, is that the very nature of a crowd means that faces will often be obscured, whether by other people or by clothing, for example in winter. Furthermore, a-pie factors (aging, posing, illumination and emotion) (Nakar & Greenbaum, 2017, p. 95) all influence the degree of certainty with which a system can identify a match. Another risk is that if the system does not identify anyone matching the description, the conclusion might be drawn that the person was not there, rather than that the system could not find the person.
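
The sketch below illustrates why “no match” is not proof of absence, under the common assumption that faces are compared as embedding vectors against a similarity threshold. The embeddings here are random stand-ins, not the output of any real model.

```python
# Thresholded face matching: occlusion (here simulated as added noise)
# depresses similarity, so a person who was present can go unmatched.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.75                    # trades false positives vs. false negatives
rng = np.random.default_rng(1)
suspect = rng.standard_normal(128)  # reference embedding of the person sought

clear_capture = suspect + 0.3 * rng.standard_normal(128)     # frontal, well-lit
occluded_capture = suspect + 1.5 * rng.standard_normal(128)  # scarf, poor light

for name, capture in [("clear", clear_capture), ("occluded", occluded_capture)]:
    score = cosine_similarity(suspect, capture)
    verdict = "match" if score >= THRESHOLD else "no match above threshold"
    print(f"{name}: similarity={score:.2f} -> {verdict}")
# The occluded capture falls below the threshold: the system fails to find
# someone who *was* there, so absence of a match is not evidence of absence.
```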

After an incident, social media accounts will also be monitored. Investigators look at whether a manifesto has been posted and, if so, will usually ask the platform to remove it after they have obtained a copy for further investigation. Social media accounts that have praised or shared the manifesto may also raise flags and become subject to further investigation, as recently happened in France (‘French Police Raid Dozens of Targets Suspected of Extremism after Teacher Beheaded’, 2020). Software that maps out relationships between such accounts or persons of interest is also increasingly used. Perhaps the most controversial company offering such software is Palantir, lauded by some for being crucial in exposing drug cartels and terror networks, while vilified by others for being complicit in human rights violations, including the deportation of undocumented migrants (Sherman, 2020).
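
As a minimal, invented illustration of such relationship mapping (commercial platforms are vastly more capable), the sketch below builds a small interaction graph around a shared manifesto and surfaces the accounts that many others link to.

```python
# Map interactions between accounts of interest and rank accounts by how
# many others point at them; all accounts and interactions are invented.
import networkx as nx

G = nx.DiGraph()
interactions = [                  # (source, action, target)
    ("acct_a", "shared", "manifesto"),
    ("acct_b", "praised", "manifesto"),
    ("acct_c", "shared", "manifesto"),
    ("acct_b", "follows", "acct_a"),
    ("acct_c", "follows", "acct_a"),
    ("acct_d", "follows", "acct_a"),
]
for src, action, dst in interactions:
    G.add_edge(src, dst, action=action)

# Accounts that many others link to may warrant closer (human) review.
for node, score in sorted(nx.in_degree_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    if node != "manifesto":
        print(f"{node}: in-degree centrality {score:.2f}")
```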

Lastly, scenario-building technologies in combination with video footage can be used for learning purposes. In unpredictable, high-stress situations, certain things may be missed that are obvious in hindsight. Extrapolating such lessons learned with the help of technologies that visualise different scenarios based on different decisions (Hutson, 2008) has a definite place in learning organisations: it can help train junior personnel or, in certain cases, help establish criminal negligence.

Risky Business

New technological developments are pushing the boundaries of capabilities every day. Emerging technologies based on artificial intelligence (AI) and machine learning (ML) in particular are becoming more prevalent in the security field. Some of these technologies rely on AI computer vision, a notoriously difficult area of programming, to combat data overload by processing large volumes of images or video. Real-time monitoring is also being enhanced: whereas older technology alerted an operator when movement was detected within a pre-selected perimeter, vendors of products such as AISight claim that the machine learning component of their technology greatly reduces false positives (Cooper, 2014). In addition, automated target acquisition and retention programs are being developed, capable of fixing a camera on a specific target and visually following it, even if the camera itself is attached to a moving platform such as a UAV (Guo et al., 2017, p. 139).
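
The difference between the two generations of motion detection can be sketched on synthetic data: a naive detector that compares frames against a fixed reference fires continuously once the lighting has drifted, while a simple adaptive background model (a crude stand-in for far more sophisticated ML approaches) absorbs slow scene changes and alerts only around genuine movement. All frames and thresholds below are invented.

```python
# Naive fixed-reference motion detection vs. an adaptive running-average
# background model, on synthetic 1-D "frames" of a slowly darkening scene.
import numpy as np

FRAMES = 40
scene = [np.full(100, 50.0 + 0.5 * t) for t in range(FRAMES)]  # slow dusk drift
scene[25] = scene[25] + 30.0                                   # genuine intruder

reference = scene[0].copy()    # naive detector: fixed reference frame
background = scene[0].copy()   # adaptive detector: running average
ALPHA, THRESH = 0.3, 5.0

for t in range(1, FRAMES):
    naive = np.abs(scene[t] - reference).mean() > THRESH
    adaptive = np.abs(scene[t] - background).mean() > THRESH
    background += ALPHA * (scene[t] - background)   # absorb slow scene change
    if naive or adaptive:
        print(f"t={t:2d}  naive alert: {naive}  adaptive alert: {adaptive}")
# The naive detector fires on every frame once the light has shifted enough,
# while the adaptive model alerts only briefly around the intruder at t=25.
```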

Another field of rapid development is that of systems that claim to detect aggressive or abnormal behaviour (Maguire & Fussey, 2016, pp. 34–35). This raises the question: what is considered abnormal? Similar questions are raised by Maguire and Fussey (2016, p. 34) regarding AI/ML systems that claim to perform sentiment detection, sometimes referred to as emotion detection. The premise is that certain (micro)expressions are indicators of an experienced emotion, which can be a prelude to hostile behaviour or malintent. The same premise underpins the work of ‘spotters’: officers trained to recognise out-of-place behaviour. However, the question is whether a machine can contextualise a perceived emotion or behaviour. When a spotter at an airport identifies someone displaying signs of anxiousness while holding a dozen red roses, the context of an upcoming romantic encounter is likely to explain the emotion; the machine may flag that same anxious behaviour as suspicious. What also ought to be considered is that many AI/ML systems have suffered from bias, which touches upon the notion of fairness if false positives disproportionately affect a certain group, for example a racial minority (Courtland, 2018). From a human rights perspective this is hugely problematic, especially in a security context where, in a high-pressure situation, a wrongful identification may result in the arrest or even death of an innocent misidentified person, as the death of Jean Charles de Menezes tragically illustrates (‘Police Shot Brazilian Eight Times’, 2005).
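
The fairness problem described above can be made concrete with a simple check: computing the false positive rate of a hypothetical alert system separately per demographic group. The records below are invented solely to illustrate the computation.

```python
# Per-group false positive rate: the share of innocent people wrongly flagged.
from collections import defaultdict

records = [  # (group, ground_truth_threat, system_flagged) -- invented data
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_a", True,  True),  ("group_b", False, True),  ("group_b", False, True),
    ("group_b", False, False), ("group_b", True,  True),
]

fp = defaultdict(int)    # innocent people wrongly flagged, per group
neg = defaultdict(int)   # all innocent people, per group
for group, is_threat, flagged in records:
    if not is_threat:
        neg[group] += 1
        fp[group] += flagged

for group in sorted(neg):
    print(f"{group}: false positive rate = {fp[group] / neg[group]:.0%}")
# group_a: 33% vs group_b: 67% -- a system with the same overall accuracy can
# still burden one group with twice as many wrongful flags.
```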

A further but related point of contention is the manner in which certain technologies are deployed, or the data they are trained on. Consider, for example, a predictive policing context assisted by a machine learning tool that, based on historical data, predicts where to send officers on patrol. If the majority of the data fed into such a pattern recognition program comes from reported crime, the model is very likely to predict that the areas most prone to crime are those where crime is reported, even if much unreported crime happens elsewhere. In computer science, this mismatch between what a system is meant to do and what it does based on flawed data is often summarised as ‘garbage in, garbage out’.
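
The feedback loop can be demonstrated in a few lines of simulation. In the invented scenario below, two areas have identical underlying crime, but one starts with more reported crime; patrols are allocated to the predicted hotspot, and patrol detections feed the next period’s data, so the initial reporting skew compounds rather than corrects.

```python
# Predictive policing feedback loop: equal true crime, skewed reports.
TRUE_CRIME = 100                             # identical in both areas, per period
reports = {"area_a": 60.0, "area_b": 40.0}   # initial reporting skew

for period in range(1, 6):
    top = max(reports, key=reports.get)      # predicted "hotspot"
    patrols = {area: (7 if area == top else 3) for area in reports}
    for area in reports:
        # next period's data: patrol detections plus decaying older reports
        reports[area] = patrols[area] * 0.02 * TRUE_CRIME + 0.5 * reports[area]
    share = reports["area_a"] / sum(reports.values())
    print(f"period {period}: area_a share of reported crime = {share:.0%}")
# area_a's share climbs from 60% towards 70% even though underlying crime is
# identical -- the model learns where crime is *reported*, not where it happens.
```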

Finally, it is worth underscoring that there are jurisdictions where emerging technologies are acquired and deployed without consideration for the task they are supposed to support. One study conducted in the US, for example, found that technology was acquired haphazardly and with little regard for the policing strategy it was supposed to underpin (Hendrix et al., 2019), and it is likely that the same happens in other jurisdictions worldwide. Such haphazard acquisition, combined with a lack of training on bias in data, can lead to a number of assumptions that negatively affect both the effectiveness of those tasked with securing crowds and the human rights of citizens. It is therefore of pivotal importance that new technologies are introduced only after careful scrutiny and consideration for the law of unintended consequences. Such scrutiny should, on the one hand, ensure that the technological tools genuinely support a designated task and are effective in doing so; on the other hand, it needs to reflect an adequate assessment of whether deployment is necessary and proportionate in terms of the impact it can have on human rights. This will most likely result in a precarious equilibrium that requires constant recalibration as emerging technologies continue to enter the marketplace in rapid succession. The solution is not to shy away from new technology in our pursuit of security, but to always question whether deploying new technology damages the very foundations of our democratic society.

References

Audio Analytics for Professionals. (n.d.). Sound Intelligence. Retrieved 20 May 2020, from https://www.soundintel.com/

Bouma, H., de Boer, M. H. T., Kruithof, M. C., ter Haar, F. B., Fischer, N. M., Hagendoorn, L. K., Joosten, B., & Raaijmakers, S. A. (2017). Automatic analysis of online image data for law enforcement agencies by concept detection and instance search. In H. Bouma, F. Carlysle-Davies, R. J. Stokes, & Y. Yitzhaky (Eds.), Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies (p. 17). SPIE. https://doi.org/10.1117/12.2277970

Christchurch attacks: Facebook curbs Live feature. (2019, May 15). BBC News. https://www.bbc.com/news/technology-48276802

Coaffee, J., Moore, C., Fletcher, D., & Bosher, L. (2008). Resilient design for community safety and terror-resistant cities. Proceedings of the Institution of Civil Engineers - Municipal Engineer, 161(2), 103–110. https://doi.org/10.1680/muen.2008.161.2.103

Coaffee, J. (2009). Terrorism, risk and the global city: Towards urban resilience. Routledge.

Coaffee, J. (2010). Protecting vulnerable cities: The UK’s resilience response to defending everyday urban infrastructure. International Affairs, 86(4), 939–954. https://doi.org/10.1111/j.1468-2346.2010.00921.x

Cooper, P. (2014, April 16). Meet AISight: The scary CCTV network completely run by AI. ITProPortal. https://www.itproportal.com/2014/04/16/aisight-the-surveillance-network-completely-run-by-ai/

Courtland, R. (2018). Bias detectives: The researchers striving to make algorithms fair. Nature, 558(7710), 357–360. https://doi.org/10.1038/d41586-018-05469-3

Crowded Places: The Planning System and Counter-Terrorism (p. 18). (2012). UK Government Home Office. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/375208/Crowded_Places-Planning_System-Jan_2012.pdf

Dataminr’s Real-time AI Platform. (n.d.). Retrieved 8 October 2020, from https://www.dataminr.com/technology

Doelwitten in beeld – Vijftien jaar jihadistische aanslagen in het Westen [Targets in view – Fifteen years of jihadist attacks in the West]. (2019). Algemene Inlichtingen- en Veiligheidsdienst (AIVD).

French police raid dozens of targets suspected of extremism after teacher beheaded. (2020, October 19). France24. https://www.france24.com/en/europe/20201019-additional-police-operations-under-way-over-beheading-of-french-teacher

Fussey, P., & Murray, D. (2019). Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology (The Human Rights, Big Data and Technology Project). University of Essex.

Guo, Q., Liang, Z., Xu, J., & Hu, J. (2017). A new UAV PTZ Controlling System with Target Localization. MATEC Web of Conferences, 139, 00180. https://doi.org/10.1051/matecconf/201713900180

Helbing, D., & Mukerji, P. (2012). Crowd disasters as systemic failures: Analysis of the Love Parade disaster. EPJ Data Science, 1(1). https://doi.org/10.1140/epjds7

Hendrix, J. A., Taniguchi, T., Strom, K. J., Aagaard, B., & Johnson, N. (2019). Strategic policing philosophy and the acquisition of technology: Findings from a nationally representative survey of law enforcement. Policing and Society, 29(6), 727–743. https://doi.org/10.1080/10439463.2017.1322966

Hutson, M. (2008, March 3). The Sim That Saves People from Each Other. Discover. https://www.discovermagazine.com/technology/the-sim-that-saves-people-from-each-other

Klein, A. (2019). From Twitter to Charlottesville: Analyzing the Fighting Words Between the Alt-Right and Antifa. International Journal of Communication, 13, 297–318.

Kyrgyzstan protesters storm parliamentary building over election result. (2020, October 5). The Guardian. https://www.theguardian.com/world/2020/oct/05/kyrgyzstan-election-120-taken-to-hospital-following-result-protest

Lu, J., Whyte, M., McCarthy, K., Aibara, D., Morison, C., & Webster, M. (2010). RIBA Guidance on designing for counter-terrorism (p. 13). Royal Institute of British Architects. http://www.continuityforum.org/sites/default/files/images/RIBAguidanceoncounterterrorism_0.pdf

Maguire, M., & Fussey, P. (2016). Sensing evil. Focaal, 2016(75), 31–44. https://doi.org/10.3167/fcl.2016.750103

Nakar, S., & Greenbaum, D. (2017). Now You See Me: Now You Still Do: Facial Recognition Technology and the Growing Lack of Privacy. Boston University Journal of Science and Technology Law, 23(1), 88–122.

Németh, J. (2009). Defining a Public: The Management of Privately Owned Public Space. Urban Studies, 46(11), 2463–2490. https://doi.org/10.1177/0042098009342903

Police shot Brazilian eight times. (2005, July 25). The Guardian. https://www.theguardian.com/uk/2005/jul/25/july7.uksecurity5

Rees, J. (2020). Facial recognition use by South Wales Police ruled unlawful. BBC News. https://www.bbc.com/news/uk-wales-53734716

Satariano, A. (2019, September 16). This Camera Is Testing Britain’s Acceptance of Surveillance. The New York Times, 1.

Scraton, P. (1999). Policing with Contempt: The Degrading of Truth and Denial of Justice in the Aftermath of the Hillsborough Disaster. Journal of Law and Society, 26(3), 273–297. https://doi.org/10.1111/1467-6478.00126

Sherman, N. (2020, September 30). Palantir: The controversial data firm now worth £17bn. BBC News. https://www.bbc.com/news/business-54348456

Stroud, M. (2013, April 16). In Boston bombing, flood of digital evidence is a blessing and a curse. The Verge. https://www.theverge.com/2013/4/16/4230820/in-boston-bombing-flood-of-digital-evidence-is-a-blessing-and-a-curse

Treverton, G. F., Wilke, E., & Lai, D. (2011). Moving toward the future of policing. RAND, National Security Research Division.

Whiting, A., & Nakos, G. (2008). Functional Density and its Impact on Retail Satisfaction in Cross-Cultural Contexts: Can Crowded Stores be a Good Thing for Retailers. International Business, 11.

Withnall, A. (2014, May 25). Elliot Rodger’s ‘manifesto’ and YouTube video describe plans for rampage in horrifying detail: ‘I’ll take great pleasure in slaughtering you all’. Independent. https://www.independent.co.uk/news/world/americas/elliot-rodger-s-manifesto-and-youtube-video-describe-plans-rampage-horrifying-detail-i-ll-take-great-pleasure-slaughtering-you-all-9432770.html