by Charlie Ackland
Who would you trust more to develop new AI technology in wide-ranging areas such as border security, finance, and biometrics: researchers and developers of the AI, or private corporations? What if I were instead to scrap the question altogether and claim that the two are intertwined? In July 2024, working as part of the King’s Undergraduate Research Fellowship (KURF), I analysed over 200 patents from various international private companies, namely IDEMIA, Thales, and Accenture, all of which hold European Union contracts for border control and migration management. Simply put, a patent is a legal document issued by a patent authority to an individual or group, granting exclusive rights to an invention. Patents are accessible to the public, meaning that they are essentially available for scrutiny. However, this sense of transparency is tainted by the rather complex and obscure nature of patent documents, which are technologically and legally inaccessible to the general public. This blog will summarise some of my primary discoveries and assess the socio-political impacts of these findings.
Problematisation
Problematisation is a focal element of these patents. It refers to the formula each patent application must use to justify its large-scale technological inventions, methods, or systems by defining one or more issues that the new technology is supposed to solve. Arguably, the most interesting component of this ‘problematisation’ is the methods companies choose to define issues. Apart from a small number of cases in which clear issues are omitted entirely, some of the main methods are as follows:
Fixing the prior art
One of the most prominent methods of problematisation is criticising the ‘insufficiency’ of prior art in order to promote new technologies. Subsequent solutions focus on training data applications to build more sophisticated technology, framed around the idea of updating insufficient, expensive, and complex prior art. This evokes a consistent justification for new art, something of an ‘out with the old, in with the new’ mentality. Other patents claim to fill a ‘technological gap’: existing insufficiency is framed either as a gap in current technology or as a lack of technology entirely. These patents often stress the problems, whether of prior technology or of a technological gap, in order to justify their methods. However, it can be difficult to gauge the severity of these issues, as they are not always fully explained, and they are often framed in technical terms that the general public may struggle to comprehend. Furthermore, tech development is often preventive rather than responsive, which seems an astute process but can be flawed if there is insufficient evidence of an issue to prevent. For example, an Accenture patent outlines a defensive system for protecting machine learning models in production environments, which ‘may’ be subject to adversarial attack (though no evidence of such attacks is shared).
Fraud prevention
A Thales patent delineates the primary aims of biometric research as reducing erroneous recognitions or rejections and optimising the algorithms to make their implementation easier. Yet fraud prevention should arguably also be regarded as a clear primary aim of the biometric research contained in these patents, as the growth of biometric technology goes hand-in-hand with the strengthening and sophistication of fraudulent attempts to bypass such systems. Across the three companies, there is a focus on building technologies that aim to solve niche areas of fraud. Many patents seek to strengthen security by proposing innovative changes to biometric systems, all intended to prevent the success of fraud or the proliferation of new fraudulent techniques, such as ‘morphing’, detailed in an IDEMIA patent. This is important, as it could be argued that companies use fraud as an excuse to render a technology insufficient, allowing for the construction of novel technologies that alter the capabilities of the technology altogether. Moreover, the way these patents link problem to solution is questionable. In another IDEMIA patent, a form of fraud is described as one individual hiding behind another to bypass a biometric recognition system. Apart from the improbability of this scenario in real time, it is important to assess the issue alongside its proposed solution: an AI system that estimates a user’s weight with pressure sensors. This tech solution proposes a more invasive and potentially discriminatory identification method, using weight sensors, in response to a fraud mechanism that is unlikely to be prominent and can be solved in various other ways.
Human error
Interestingly, another prominent area of ‘problematisation’ is the capacity for human error. A Thales patent problematises ‘man-in-the-loop’ systems, vaguely using them to explain how ‘operational efficiency and manning requirements are critical.’ Whilst this patent upholds the idea of an invention that would provide multiple decision-support tools to operators, it arguably also attempts to hold on to the man-in-the-loop mechanism that many other systems are abandoning. Should this be perceived as a positive step? Other patents, such as another from Thales, treat human inference as one of the issues to be solved through the growth of AI assistance systems. Alongside this, many of these patents focus on automating systems, through the use of AI technologies, to surpass the ‘burdensome’ limitations of human labour. Often, there is a lack of correlation between the problems and the desired technological solutions. Many patent applications seem to promote their intensive and innovative inventions as a direct response to a specific, and often trivial, problem. At other times, patents problematise the stress of human labour but fail to present adequate evidence of such failure. An Accenture patent illustrates this tendency: it proposes a system using aerial cameras to capture image data of a physical area that would be difficult for a person to access on their own, with no sufficient evidence of human error in navigating certain border areas. This raises many problems, one being that, although these patents explain how they intend for their technologies to be used, there are many grey areas in which governments or companies could exploit them for deployment in other contexts.
Bias
Bias has been prominent in public debates surrounding AI, and counteracting such biases is one of the most important measures for ensuring the safety of the technologies and of the people affected by them. However, there is little explanation of bias, or of efforts to counter it, in the IDEMIA, Thales, and Accenture patents. Most biometric and AI technologies certainly have the potential to be susceptible to some form of bias. For instance, recognition systems such as facial biometrics will propose complex machine learning systems whilst neglecting any mention of the potential for bias. Whilst some technologies aim to prevent bias, this is usually framed in a technical manner, once again with no clear explanation of whom the bias could impact. Taking Accenture as an example: among the 70 patents I analysed, nearly every one that works to ‘identify’ or prevent bias frames it only in technical terms. This is usually through the lens of technological bias, with human fault entering AI, as in the case of transcription bias. In one such Accenture patent, there is a lack of consideration of the socio-technical impacts of this bias. Without knowing who is affected by these biases, there is no guarantee that reforming a system to make it more sophisticated will entirely prevent them, particularly where the risk to certain demographic groups is at stake.
Semantics of innovation
The patents I have analysed demonstrate a notable use of semantics to make compelling cases for new technologies. By situating a technology as a protective system that aims to prevent an ‘attacker’, usually malware or a fraudster, such systems are automatically framed as a necessary and positive ‘defence’. This is especially concerning given that patents usually frame these ‘attacks’ and ‘attackers’ only as being in positions that could do damage, while offering little or no evidence of such damage. These semantic binaries of ‘attack’ and ‘defence’ read as methods of affective persuasion rather than presentation of evidence.
The analysis of the three companies’ patents, whilst not exhaustive, offers a preliminary understanding of the methods of justification and obfuscation that are part of technological innovation.