Will AI take away your job?
The answer is probably not. AI systems can be good predictive systems and very good at pattern recognition. They take a highly repetitive approach to sets of data, which can be useful in certain circumstances. However, AI does make obvious mistakes, because AI has no sense of context. As humans, we have years of experience in the real world. We have vast amounts of contextual data stored in our brains that make it possible to predict and to know the boundaries of the real world, so that even if we have never been in a particular situation, we are still able to deal with it. The unknown situation is where AI will fall down. In engineering, AI is being developed to monitor and control certain systems; this could extend to nuclear power plants, for example. If we examine control systems for nuclear power plants on vessels, the system would likely be programmed with the safety of the reactor as the main priority. Automation is crucial in engineering systems like this, as the likelihood and cost of human error are high and human reaction times are slow in comparison to the system. However, there is always the possibility of unforeseen external factors that override plant safety and that cannot be programmed for. If we look at a plant onboard a submarine, for example, we see such additional factors as ship safety. The power plant might breach safety parameters, in which case an automatic system may shut down the reactor. However, there might be a greater urgency, such as a flood in another compartment or an attack upon the vessel, that would override shutting down the reactor.
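To make that override logic concrete, here is a minimal, hypothetical sketch (the PlantState fields and should_scram rule are invented for illustration; this is not any real reactor control system):

```python
# A minimal, hypothetical sketch of the priority-override logic described
# above (not a real reactor control system).
from dataclasses import dataclass

@dataclass
class PlantState:
    reactor_within_limits: bool
    flood_in_compartment: bool
    under_attack: bool

def should_scram(state: PlantState) -> bool:
    """Shut down the reactor only if safety limits are breached AND no
    higher-priority ship-safety emergency demands continued power."""
    ship_emergency = state.flood_in_compartment or state.under_attack
    return (not state.reactor_within_limits) and not ship_emergency

# Reactor out of limits, but a flood takes priority: no shutdown.
print(should_scram(PlantState(reactor_within_limits=False,
                              flood_in_compartment=True,
                              under_attack=False)))  # False
```

The limitation the text points to is visible in the code: the rule only covers the emergencies its designers enumerated, so a genuinely unforeseen situation falls outside the model entirely.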
Why interdisciplinary research in AI is so important, according to Jurassic Park
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” I think this quote resonates with us now more than ever, especially in the world of technological development. The writers of Jurassic Park were years ahead of their time with this powerful quote. As we build new technology and push on to see what can actually be achieved, there is an undertone of sales to whatever we build: the end product must be sold somewhere and to someone. This can derail any good intentions. Just as a resort full of dinosaurs was sold as a fun attraction, we see later in the film that it became a resort of terror. In the field of AI we are certainly late to the party with ethics and regulation. Indeed, even existing modelling protocols have, in many cases, been circumvented or potentially ignored. This has widened the gap in the interdisciplinary field of AI, compounded by pop culture representations of AI and ethicists’ potential lack of knowledge of technical progress in the field. There are now seemingly two separate branches that ought to be in sync. The first is a group of philosophers led by the author of Superintelligence, Nick Bostrom, who believe that there is a singularity at which AI takes over the world and starts to kill off humans. The second is the technical cohort, led by companies such as Google and DeepMind. The remit of these developers is to see what can be developed and produced. Ultimately a separate sales team will determine what products can be sold …
Black Mirror is already here. Should we be afraid?
The dystopian tale has a special place in our shared cultural heritage. Many of us will have a favourite, or perhaps several. I myself adored 1984 and The Handmaid’s Tale as a youngster; moved on to J.G. Ballard, then discovered Philip K. Dick thanks to Minority Report; and in recent years was floored by Black Mirror episodes and videogames such as The Last of Us. The thrill can be explained by one question: ‘What if this horror was actually happening?’ “People say Black Mirror and The Handmaid’s Tale are conspiracies, science fiction – but as a philosopher, I can see a lot of the elements in these films and books that are actually happening now.”
How Humans Can Interact Positively with Technology
Digital technology has advanced more rapidly in the last 20 years than ever before. However, people are often unlikely to understand the risks of this technology, in part because developers are not transparent about functionality. This development has also far outpaced educational provision in schools, leading to a world in which we are encouraged to engage with digital technology but don’t quite understand what it is, how it works or what it does; therefore, if our data is harvested and sold, it is much more difficult to understand when, why and how this is happening.
The gap between AI practitioners and ethics is widening – it doesn't need to be this way
The application of AI technologies to social issues and the need for new regulatory frameworks is a major global issue. Drawing on a recent survey of practitioner attitudes towards regulation, Marie Oldfield discusses the challenges of implementing ethical standards at the outset of model design and how a new Institute of Science and Technology training and accreditation scheme may help standardise and address these issues.
Code Dependent: A Book Review
In Code Dependent, Madhumita Murgia considers the impact of AI, and technology more broadly, on marginalised groups. Though its case studies are compelling, Marie Oldfield finds the book lacking in rigorous analysis and a clear methodology, inhibiting its ability to grapple with the concerns around technology it raises. Madhumita Murgia spoke at an LSE event, What it means to be human in a world changed by AI, in March 2024 – watch it back on YouTube. Code Dependent: Living in the Shadow of AI. Madhumita Murgia. Picador. 2024.
Unequal Sample Sizes and the Use of Larger Control Groups
To date, researchers planning experiments have lived by the mantra that ‘using equal sample sizes gives the best results’, and although unequal groups are also used in experimentation, they are not the preferred method of many and are indeed actively discouraged in the literature. However, during live study planning there are other considerations that we must take into account, such as the availability of study participants, statistical power and, indeed, the cost of the study. These can all make allocating equal sample sizes difficult, and sometimes near impossible. This, some might say, means that the study would not adhere to rigorous statistical standards (Rosenbaum and Rubin, 1985). However, here we present evidence not only that this is a false assumption, but that we may actually gain more power in the study by using unequal groups. Here, data from a sepsis biomarker study is used, in which the aim is to predict, by biomarker level and presence, whether the patient would go on to develop sepsis. It was found that larger control groups may give more power to studies looking for an effect in the mid range, but not for large or small effects. This study shows merit in the hypothesis that more power can be achieved when a larger control group is used. Published by the Ministry of Defence at Dstl – Reference DSTLTR92592 P2PP2R-2016-02-23T13:39:45
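To make the allocation trade-off concrete, here is a minimal sketch (illustrative only, not the sepsis study's actual analysis; the fixed case count and Cohen's d values are assumptions) using statsmodels to compute two-sample t-test power as the control-to-case ratio grows:

```python
# A minimal sketch of the unequal-allocation trade-off (illustrative only,
# not the sepsis study's analysis): with the number of cases fixed, adding
# controls raises the power of a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_cases = 50  # hypothetical fixed number of sepsis cases
for effect in (0.2, 0.5, 0.8):  # small, medium, large effects (Cohen's d)
    powers = [analysis.power(effect_size=effect, nobs1=n_cases,
                             alpha=0.05, ratio=r)
              for r in (1, 2, 4)]  # controls recruited per case
    print(f"d={effect}: power at 1:1, 2:1, 4:1 = "
          + ", ".join(f"{p:.2f}" for p in powers))
```

Under these assumptions the output should show the pattern the abstract describes: a large effect is already well powered at a 1:1 ratio, a small effect remains underpowered regardless, and mid-range effects gain the most from extra controls, with diminishing returns beyond roughly a 4:1 ratio.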
Women in Tech: Challenges and Recommendations
When only women turn up to a panel on challenges for women in technology, how do we then reach out to industry, academia and government to encourage them to listen to the current challenges experienced by women in tech? Technology is rapidly changing, and we are seeing women disadvantaged by fewer training opportunities, a lack of role models and perceived penalties for taking time off to have children or discharge caring responsibilities, as well as the risk that their jobs are subject to more automation. Multiple workshops at the Institute of Science and Technology highlighted significant challenges for women in tech; the data from our empirical study illustrates these challenges in detail. With the workplace still male dominated and the landscape changing rapidly, women have a significant role to play, and we need to ensure that this role is not only facilitated but that existing challenges are mitigated. This is a discussion paper with empirical data that illustrates the challenges currently experienced by women in tech and how we can move forward not only to ensure equal opportunity but also to remove some of these challenges. In this paper we have not considered the equivalent impact on men who take career breaks for reasons of caring responsibilities.
Analytical Modelling and UK Government Policy
In the last decade, the UK Government has attempted to implement improved processes and procedures in modelling and analysis in response to the Laidlaw report of 2012 and the Macpherson review of 2013. The Laidlaw report was commissioned after failings during the Intercity West Coast Rail (ICWC) franchise procurement exercise by the Department for Transport (DfT) that led to a legal challenge of the analytical models used within the exercise. The Macpherson review looked into the quality assurance of Government analytical models in the context of the experience with the Intercity West Coast franchise competition. This paper examines what progress has been made in model building and best practice in government in the 8 years since the Laidlaw report and proposes several recommendations for ways forward. It also discusses the Lords Science and Technology Committee inquiry of June 2020, which analysed the failings in the modelling of COVID-19. Despite those models going on to influence policy, many of the same issues raised within the Laidlaw and Macpherson reports were also present in the Lords Science and Technology Committee inquiry. We examine the technical and organisational challenges to progress in this area and make recommendations for a way forward.
Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice
AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and AI audit frameworks. We highlight the responsibilities of funding bodies to ensure investment is channelled towards trustworthy and safe AI systems, and provide case studies as to how other ethical funding principles are managed. We offer a first sight of two proposals for funding bodies to consider regarding procedures they can employ. The first proposal is for the inclusion of a ‘Trustworthy AI Statement’ section in the grant application form, and offers an example of the associated guidance. The second proposal outlines the wider management requirements of a funding body for the ethical review and monitoring of funded projects, to ensure adherence to the ethical strategies proposed in the applicant’s Trustworthy AI Statement. The anticipated outcome of employing such proposals would be to create a ‘stop and think’ section during the project planning and application procedure, requiring applicants to implement methods for the ethically aligned design of AI. In essence, it asks funders to send the message “if you want the money, then build trustworthy AI!”.
Dehumanisation and AI: Published in CADE 2023 by IET
Click here to read.
AI is becoming more widespread than ever as we offload decision making to algorithms. Recently we have seen many legal challenges against algorithm-powered decisions, such as discrimination relating to gender and race and incorrect benefits allocation. A recent observable issue around the implementation of AI is dehumanisation: the human reaction to overused anthropomorphism and the lack of social contact caused by excessive interaction with technology. This can lead humans to devalue technology, but also then to begin to devalue other humans. The resulting discrimination towards perceived outgroups causes division within society in the online and offline worlds. The potential exploitation that can be achieved by manipulating human belief systems to the point of dehumanisation is substantial. This contradicts the growing popularity of AI, as the negative effects of unchecked and poorly understood technology could certainly outweigh any perceived positive effects of its use. It is clear that, due to a lack of testing and of modelling forethought, we are entering uncharted territory that holds a vast array of consequences, some that we are yet to observe.
The Future of Condition Based Monitoring: Risks of Operator Removal on Complex Platforms
Click this link to read.
Complex platforms are very difficult to manage and maintain. This is why we see teams of engineers, many highly specialised, who carry out this role for industries such as aerospace, nuclear and subsurface. Maintaining such systems, which often have components at varying degrees of degradation, is a critical undertaking. To maintain complex systems, Condition Based Monitoring (CBM), a type of predictive maintenance that uses sensors to measure the status of an asset over time while it is in operation, is most frequently used. Artificial Intelligence (AI) models that have been developed in the area of CBM are currently not well explained, nor well understood by users or operators. When AI is brought into a complex system we observe varying degrees of success. The level of success rests on the complexity of the system, the training and understanding of the end operator, and the maintenance processes around the system. Implementing AI or complex algorithms into a platform can mean that the operator's control over the system is diminished or removed altogether. For example, in the Boeing 737 MAX disaster, AI had been added to a platform and removed the operators' control of the system. This meant that the operator could not move outside the extremely reserved, algorithm-defined ‘envelope’ of operation, leading to loss of life. Therefore, any implementation of AI that removes operator system management in complex systems, especially in the aerospace and subsurface industries, has to be considered carefully. In this paper we analyse the risks of removing operator system control and implementing algorithms, or AI, in complex systems.
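As a concrete illustration of the operator-in-the-loop point, here is a minimal, hypothetical sketch (the VibrationMonitor class, window size and 3-sigma threshold are invented for illustration, not any real platform's CBM system) in which the algorithm only flags anomalies and leaves the action to the operator:

```python
# A minimal, hypothetical CBM sketch: a rolling statistic flags sensor
# drift, but the final action is left with the operator rather than an
# algorithm-defined envelope.
from collections import deque

class VibrationMonitor:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold  # alert at 3 sigma from the window mean

    def update(self, value: float) -> bool:
        """Record a reading; return True if it is anomalous vs. the window."""
        if len(self.readings) >= 10:
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            anomaly = abs(value - mean) > self.threshold * max(var ** 0.5, 1e-9)
        else:
            anomaly = False  # not enough history to judge yet
        self.readings.append(value)
        return anomaly

monitor = VibrationMonitor()
for reading in [1.0] * 50 + [5.0]:
    if monitor.update(reading):
        print("Alert: raise for engineer review")  # advise, don't act
```

The design choice here mirrors the paper's concern: the algorithm advises and the operator decides, rather than the algorithm constraining what the operator is permitted to do.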
Technical Challenges & Perception: Does AI Have a PR Issue?
Click here to read.
From collecting robust data, to modelling the real world and interpreting output, modelling is a complex undertaking. Increasingly, models have been highlighted that disadvantage not only society but also those whom the model was originally designed to benefit. An increasing number of legal challenges around the world illustrate this. A surge of recent work has focussed on the technical, but not necessarily the real-world, challenges for practitioners. Through two studies we conduct an investigation into perception and real-world needs within industry. In study one we re-run the 2019 survey by Holstein et al. to determine differences between practitioner challenges in the UK and USA, and we analyse any advancements apparent since the 2019 study. In study two we examine the perception of users and practitioners towards AI. This study helps to unlock interdisciplinary reasons behind existing challenges. Based on these findings we highlight directions for future research in this area.
Towards Pedagogy Supporting Ethics in Analysis
Click this link to read.
Over the past few years there have been an increasing number of legal proceedings related to inappropriately implemented technology. At the same time, career paths have diverged from the foundation of statistics out to data science, machine learning and AI, all of which are fundamentally branches of statistics and mathematics. This has meant that formal educational training has struggled to keep up with what is required in the plethora of new roles. Mathematics as a taught subject is still based on decades-old teaching specifications and has not been updated centrally in the UK as a curriculum to include new technologies, coding or ethics. The disciplines involved in technology, mathematics and related subjects are firmly split between ICT (Information and Communications Technology) and mathematics in secondary school, continuing on to be split between computer science and mathematics at university. As we continue to develop technology, we see these academic fields becoming increasingly intertwined.
This paper proposes that education in concepts such as ethics and societal responsibility, which are critical to building robust and applicable models, does currently exist but in isolation: it has not been incorporated into the mainstream curriculum of school or university. This is partially due to the split between fields in an educational setting, but also to the speed with which education is able to keep up with industry and its requirements. Introducing principles and frameworks of socially responsible modelling at school level would mean that ethics and real-life modelling are encountered much earlier than they currently are. Integrating these concepts with philosophical principles of society and ethics would ensure suitable foundations for future modellers and users of technology to build upon.
Anthropomorphism and its impact on the Perception and Implementation of AI
Click this link to buy the book.
Anthropomorphism is a technique humans use to make sense of their surroundings. It is also widely used to influence consumers to purchase goods or services: such techniques can entice consumers into buying something to fulfil a gap or desire in their life, ranging from loneliness to the desire to be exclusive. By manipulating belief systems, consumer behaviour can be exploited. This paper examines a series of studies to show how anthropomorphism can be used as a basis for exploitation. The first set of studies examines how anthropomorphism is used in marketing and the effects on humans engaging with this technique. The second set examines how humans can be potentially exploited by artificial agents. We then discuss the consequences of this type of activity within the context of dehumanisation. This research has found potentially serious consequences for society and humanity, which indicate an urgent need for further research in this area.
The economic case for getting asylum decisions right the first time
Click here to read the article.
Click here to read the media coverage in the Independent.
Research with Pro Bono Economics and the Refugee Survival Trust
Over half the total applications for asylum the UK receives each year are initially rejected, yet nearly a third of these initial rejections are subsequently overturned on appeal. This failure to get decisions right first time imposes significant costs, not just on the applicants themselves, but also more widely on UK taxpayers.
The taxpayer and Treasury bear the costs of this system failure in a number of different ways. Directly, resource is wasted within the courts and the legal aid system. The more protracted the process, the longer the Home Office must fulfil its obligations to provide accommodation and subsistence to asylum seekers at risk of destitution. There are also additional administrative costs to the Home Office: we estimate the cost of incorrect initial decisions adds up to £4 million per year.
The NHS must also manage the knock-on impacts of incorrect initial asylum decisions. More than 61% of asylum seekers and refugees experience serious mental distress, including higher rates of depression, post-traumatic stress disorder and other anxiety disorders, and being refused asylum is the strongest predictor of depression and anxiety among asylum seekers.
In addition, the longer the appeals process drags on, the greater the opportunity costs for the UK economy. With the majority of asylum seekers banned from working, the Exchequer misses out on significant tax receipts. While refugees are stuck in a position of unemployment, their skills can become eroded: only 15% of refugees find employment in the UK of a similar status to that which they held in their country of origin. That has long-term impacts for the economy, with asylum seekers earning and working less than UK nationals and economic migrants.
At a time of real pressure both on public sector departmental budgets and NHS services, and when businesses are struggling to fill skills gaps, these costs cannot be dismissed. Nor should the potential benefits of refugees’ skills and experience be underestimated.
Reducing the number of incorrect initial decisions on asylum applications would require tackling a number of challenges that exist within the system, from the training of Home Office staff to the consistent provision of competent translators. Our research indicates that the support provided to asylum seekers during their application process may play a key role in affecting the outcomes of their applications.
The environment in which many people apply for asylum in the UK is an incredibly unstable one. Often arriving in the UK with very few resources, facing great uncertainty about their future and forbidden from working, many asylum seekers are reliant on the state and charities to survive and meet their essential needs, from bus passes to food. Only a very limited support system is provided by the government, and many individuals and families find themselves in precarious financial positions in addition to coping with the substantial trauma of the circumstances which forced them to flee home. This backdrop can impact the ability of asylum seekers to represent and advocate for themselves during the asylum process.
This is backed by evidence suggesting that the most vulnerable groups of asylum seekers are consistently more likely to have their appeals upheld by the courts. That includes women, who have been more likely to succeed in their appeals every year for the last decade aside from 2015. There is also a marked difference in success rates between nationalities, with asylum seekers from nations experiencing extreme violence – such as Afghanistan, Sudan, Yemen and Libya – twice as likely to be successful at appeal as those from less overtly violent nations. Coming to the UK having experienced significant trauma and with few resources, these groups are precisely those who need the most support from the asylum system.
Given this, investment in forms of support for asylum seekers which help create a more stable environment in which to go through the asylum process could help not only cut down on the costs of incorrect initial decisions but also on other potentially greater costs for the taxpayer. Charities which provide services such as help to access childcare, education, integration, transportation, essential goods, and accommodation to asylum seekers play an essential role in helping to ensure asylum applications are right first time by contributing to a more stable environment in which to apply.
Published Research with Scoliosis SOS at SOSORT 2022
Click this link to read.
Our extensive clinical and statistical work with Scoliosis SOS has resulted in four abstracts being presented at this year’s SOSORT Conference in San Sebastian. We are very proud to work with Scoliosis SOS and to be able to help those with scoliosis enjoy an improved quality of life.
- Exploring the effectiveness of an intensive treatment of Physiotherapy Scoliosis Specific Exercise (PSSE) on improving Thoracic Apical Spine Deviation in patients with Idiopathic Scoliosis (IS) using a Formetric Scanner
- The relationship between the Angle of Trunk Rotation and Forced Vital Capacity in patients with Idiopathic Scoliosis after a Four-Week Intensive Physiotherapy Scoliosis Specific Exercise Programme, an update on the SOSORT Award Winner, 2017
- Exploring the effectiveness of an intensive treatment of Physiotherapy Scoliosis Specific Exercise (PSSE) on improving Pelvic Obliquity (PO) in patients with Idiopathic Scoliosis (IS)
- Exploring the effectiveness of online Physiotherapy Scoliosis Specific Exercise (PSSE) in improving health related quality of life (HRQoL) in patients with Idiopathic Scoliosis (IS): A comparison between online and face-to-face (F2F) treatment