Thursday, May 2, 2024

International Student Conference Explores Intersection of Artificial Intelligence and Human Rights


On April 19, 2024, an international online student conference on the topic “Artificial Intelligence and Human Rights: Current Challenges and Future Prospects” was held at Kyiv National Economic University named after Vadym Hetman in collaboration with Bournemouth University, UK.

The conference was moderated by its co-chairs, Dr Olena Titova and Dr Steph Allen, whose intellectual synergy steered productive discussions.

Highlighting the current problems and opportunities of artificial intelligence in modern legal reality, the conference heard and discussed the following presentations:

Topic 1. ‘The Objective, Yet Emotive UK Jury: Can AI Replace and Reform for The Better or Worse?’. Dr Max Lowenstein, Bournemouth University.

Dr Max Lowenstein discussed the role of AI in supporting neurodiversity and its implications across sectors, including the criminal justice system and employment. He touched on ongoing interest in the UK in how neurodiverse and disabled clients are treated within the justice system and highlighted research on employability for neurodiverse individuals.

A central theme was whether AI can emulate human emotions and make decisions comparable to those of a judge or jury. Dr Lowenstein noted that AI shows strengths in cognitive and analytical tasks, citing a graph from 2020 hospitality research suggesting AI's potential to outperform humans in specific tasks and even to handle employment decisions such as hiring and firing.

AI's capabilities in legal settings were also discussed, such as sifting through evidence and meeting evidential standards of proof. Dr Lowenstein acknowledged AI's current use in digital forensics but questioned its ability to manage the subjectivity involved in assessing witness credibility.

A significant part of the discussion revolved around the moral implications of AI judging humans, with references to legal philosophers' views on the necessity of human judges, citing the cases of O.J. Simpson and Oscar Pistorius to illustrate the complexity of human emotion and discretion in legal judgments.

Consideration was given to AI's role in business, particularly in emotional intelligence, and to the ethical concerns surrounding data protection. Reflecting on three research articles, Dr Lowenstein noted that the studies explored the relationship between human developers and AI's capabilities, suggesting that, as its creators, humans influence AI's functionality and its potential legal personality.

In summary, Dr Lowenstein examined the intersection of AI with human judgement, emotion, and ethics, questioning AI's potential to replace human discretion in professional and legal decisions. He underscored the complexity of AI's integration into society and called for careful consideration of the human-AI relationship as technology progresses.

Topic 2. ‘Impact of AI Technology Use in Policing’. Ocean Bennett, Bournemouth University.

Ocean Bennett addressed the implications of using AI in policing, focusing on two themes: predictive policing and facial recognition in public spaces. Predictive policing uses AI to forecast where crimes might occur, guiding police deployment. Although aimed at crime prevention, concerns arise over the source data, often reflecting racial profiling, particularly against ethnic minorities. In the U.S., data shows ethnic minorities are disproportionately stopped and searched, skewing AI predictions and perpetuating discrimination, especially against Black and Hispanic communities.
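The feedback loop Ocean described can be made concrete with a small simulation (an illustrative sketch prepared for this write-up, not material from the presentation; the districts, rates, and allocation rule are all assumptions). Two districts offend at the same true rate, but the historical stop data is skewed toward one of them, and patrols are allocated in proportion to past recorded incidents:

```python
# Toy model of a predictive-policing feedback loop. All numbers and
# the allocation rule are illustrative assumptions, not data from
# the presentation.

TRUE_CRIME_RATE = {"district_a": 0.05, "district_b": 0.05}  # identical rates
recorded = {"district_a": 150, "district_b": 50}            # skewed history

PATROLS_PER_ROUND = 100

for _ in range(10):
    total = sum(recorded.values())
    for district, rate in TRUE_CRIME_RATE.items():
        # "Predictive" deployment: patrols follow past recorded incidents.
        patrols = PATROLS_PER_ROUND * recorded[district] / total
        # Crime is only recorded where officers are present to observe it.
        recorded[district] += patrols * rate

share_a = recorded["district_a"] / sum(recorded.values())
print(f"district A's share of recorded crime after 10 rounds: {share_a:.0%}")
# Output: 75%. Both districts offend at the same true rate, yet the
# skewed starting data locks in district A's inflated share.
```

Because new records are generated in proportion to existing ones, the system never discovers that the underlying rates are equal; only an external correction to the data or the allocation rule breaks the loop.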

Facial recognition technology's reliability was also questioned, especially its accuracy across different ethnicities, with Black women aged 18-35 the most susceptible to misidentification. This has real consequences, as seen in the wrongful arrest of Porcha Woodruff, a pregnant woman who, after being misidentified, suffered distress and dehydration as a result of the incident.

Only 44% of people from ethnic minority groups in Great Britain trust the police, falling to 37% among Black respondents, and the integration of AI in policing risks eroding that trust further. Although some AI developers, including Microsoft and Google, have called for regulation and initially refused to sell the technology to law enforcement, some of those commitments have since been retracted and sales have proceeded.

Ocean concluded with a call for a shared global framework to protect human rights against AI misuse in law enforcement, stressing that developer promises are insufficient without robust oversight and legal structures.

Topic 3. ‘The Sustainable Development of Artificial Intelligence’. Joseph McMullen, Bournemouth University.

Joe McMullen provided a critical examination of the environmental impact of artificial intelligence (AI), followed by a more optimistic view on its potential role in sustainability. Initially, Joe focused on the substantial environmental cost of AI development, including excessive consumption of resources such as energy for data storage, production of technology infrastructure, and electronic waste from discarded devices.

According to the International Energy Agency, in 2022 data centres and cryptocurrency mining accounted for nearly 2% of global electricity demand, with projections that this could double or even triple by 2026. Joe highlighted that training AI systems is also resource intensive: training GPT-3, a predecessor of the current generation of chat models, emitted an estimated 502 metric tonnes of carbon dioxide. Moreover, the operational carbon footprint of such systems remains significant, contributing around 8.4 tonnes of carbon dioxide annually.

Furthermore, water consumption for cooling data centres is immense, with Microsoft's usage increasing by 34% between 2021 and 2022 to nearly 1.7 billion gallons. The disposal of electronic waste, particularly lithium-ion batteries, poses serious environmental and human rights concerns due to the mining process and contamination risks.

However, Joe shifted to a more positive outlook, referencing the AI "hype cycle" and suggesting that the current stage of inflated expectations will eventually give way to more productive and sustainable applications. AI is already being utilised in environmental efforts such as monitoring ice melt, deforestation, and wildfires, predicting extreme weather, and improving waste management, demonstrating its potential to contribute to environmental conservation and disaster management. U.S. climate envoy John Kerry's assertion that 50% of the carbon reductions needed for net zero may come from technologies yet to be invented inspires hope that AI could be instrumental in delivering those innovations. Despite the heavy environmental toll, Joe, as an environmentalist, believes AI could ultimately prove beneficial for the planet, optimising our capacity to address environmental challenges effectively.

Topic 4. ‘New EU AI Act: Capable of Setting the Tone Worldwide?’ Dr Lingling Wei, Bournemouth University.

Dr Wei discussed the European Union's new Artificial Intelligence Act (AI Act), a ground-breaking legislative framework aimed at regulating AI technologies. AI itself has been compared to innovations such as the steam engine and the atomic bomb, reflecting its potential to significantly alter society.

The AI Act includes provisions regulating facial recognition technology, banning its use in public spaces with specific exceptions for law enforcement, such as locating missing persons or preventing serious threats, subject to strict conditions and oversight. Another focus is the management of deepfakes, requiring developers to disclose when content has been synthetically generated. Exceptions are provided for creative and artistic works, provided the disclosure does not detract from enjoyment of the work.

Generative AI, such as the model used for ChatGPT, falls under the AI Act's scrutiny, especially regarding training on copyrighted materials. Prior debates on whether such training constitutes infringement are acknowledged but not conclusively resolved by the Act. Instead, the Act requires detailed content summaries and legal compliance without outright prohibiting the use of copyrighted works for AI training.

The AI Act also addresses the need for transparency in AI outputs, stipulating that deepfake-based systems' outputs must be identifiable as such, commonly through watermarking. The Act defines 'general purpose AI models' that require substantial data for training, triggering copyright concerns.
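Watermarking itself is an open technical problem. One approach widely discussed in the research literature (not prescribed by the Act; the sketch below is a simplified toy version with assumed vocabulary and parameters) biases a text generator toward a pseudorandom "green list" of words at each step, so a detector who knows the scheme can test for the statistical signature:

```python
import hashlib
import random

# Toy vocabulary; a real scheme operates on a language model's tokens.
VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima"]

def green_list(prev_word: str) -> set[str]:
    # Seed an RNG with a hash of the previous word; half the
    # vocabulary is "green" at this position.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(n_words: int, watermark: bool) -> list[str]:
    rng = random.Random(0)
    words = ["start"]
    for _ in range(n_words):
        greens = green_list(words[-1])
        if watermark and rng.random() < 0.9:
            words.append(rng.choice(sorted(greens)))  # prefer green words
        else:
            words.append(rng.choice(VOCAB))           # unmarked choice
    return words[1:]

def green_fraction(words: list[str]) -> float:
    # A real detector would turn this count into a z-score against
    # the 50% expected when no watermark is present.
    pairs = zip(["start"] + words[:-1], words)
    return sum(w in green_list(prev) for prev, w in pairs) / len(words)

print(f"watermarked: {green_fraction(generate(500, True)):.2f}")   # ~0.95
print(f"unmarked:    {green_fraction(generate(500, False)):.2f}")  # ~0.50
```

The mark is invisible to a reader but statistically detectable, which is the property the Act's transparency provisions rely on.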

Importantly, the AI Act categorises AI systems based on risk levels, from minimal to unacceptable risks, outlining specific prohibitions and conditions for high-risk AI applications. It emphasises a regulatory model focusing on risk assessment rather than a traditional liability framework. The Act's reach extends beyond EU borders, applying to any company operating within the EU market or affecting EU citizens.

Finally, the AI Act establishes an Artificial Intelligence Board to support enforcement, with a structure of fines graded by severity. A fine can be a fixed amount or a percentage of global turnover, whichever is higher. Dr Wei suggested that the EU's approach could influence global AI governance, setting a trend for AI regulation worldwide, although it may not yet be fit for global use.
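The "whichever is higher" rule means the effective ceiling scales with company size, as a few lines show (the tier figures below are those widely reported for the final text of the Act and should be verified against the official version):

```python
# AI Act penalty tiers: a fixed cap or a share of worldwide annual
# turnover, whichever is HIGHER. Figures as widely reported for the
# final Act; verify against the official text before relying on them.
TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),
    "other_obligations":     (15_000_000, 0.03),
    "incorrect_information":  (7_500_000, 0.01),
}

def maximum_fine(violation: str, global_turnover_eur: float) -> float:
    fixed_cap, turnover_share = TIERS[violation]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A startup with EUR 10m turnover vs a giant with EUR 100bn turnover:
print(maximum_fine("prohibited_practice", 10e6))   # 35,000,000 (cap binds)
print(maximum_fine("prohibited_practice", 100e9))  # 7,000,000,000 (7% binds)
```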

Topic 5. ‘AI and the impact on low-income households’. Jasmine White, student, Bournemouth University.

Jasmine White addressed the socioeconomic implications of AI on employment, particularly affecting low-income individuals. People often require employment to afford basic necessities due to insufficient social support systems. This need is more acute for lone parents, individuals with illnesses or disabilities, and carers, including young carers who look after their parents. These groups may be forced into minimum wage jobs due to a lack of qualifications, which they cannot pursue because of time constraints or the inability to afford further education.

Jasmine raised concerns that AI's increasing implementation in the workforce poses a significant threat to low-wage positions, such as customer service roles that AI systems can now fulfil. As AI takes over these jobs, the job market shrinks, increasing competition for the positions that remain. This competition raises the qualifications required for even entry-level jobs, placing a greater burden on those in low-income situations who cannot escape the cycle.

The discussion touched on human rights implications, citing Article 23 of the Universal Declaration of Human Rights, which stipulates the right to employment and protection against unemployment. However, employers may prefer AI for its cost-effectiveness and efficiency, as AI does not require salaries, benefits, or breaks.

Jasmine also questioned whether employers have a moral responsibility to prioritise human workers despite the higher costs, or whether the state should intervene. Monitoring the impact of AI on low-income families is crucial for implementing support measures to help them obtain better-paying jobs.

In summary, a significant challenge lies in the pace of legislative response to AI's rapid advancement. Legislation often lags behind technology, making it difficult to mitigate AI's adverse effects on vulnerable populations. Jasmine suggests that without proper laws and support, AI's growth could exacerbate hardships for those already struggling, adding barriers to improving their quality of life.

Topic 6. ‘Artificial Intelligence and Article 8: Respect for private and family life, home, and correspondence’. Dimitar Bogdanov, student, Bournemouth University.

Dimitar discussed the tension between mental health, privacy rights, and the advancement of artificial intelligence (AI) technology.

He highlighted Article 8 of the European Convention on Human Rights, given effect in the UK through the Human Rights Act 1998, which safeguards an individual's right to respect for private and family life. Private life encompasses a wide array of personal aspects, including sexuality, body, personal identity, relationships, and data protection, the last of which was the main focus of this topic.

With the evolution of AI, the potential for breaches of Article 8 is significant. Human reliance on emotion over logic can lead to privacy vulnerabilities when interacting with AI, which, as a corporate product, exists primarily for profit. AI systems such as GPT encourage users to engage in conversation while advising them not to share sensitive information. However, what counts as sensitive varies from user to user, potentially exposing people to privacy violations without their full awareness.

The rise in loneliness, which the Campaign to End Loneliness reports affects 7.1% of the UK population, raises concerns about how people use AI for socialisation. As individuals turn to AI for companionship, they may inadvertently share personal information that could be used to profile them for advertising, posing a threat to their right to privacy.

Furthermore, with AI being free and widely accessible, there’s a saying that if the product is free, then you are the product. Users become unwitting contributors to data harvesting for corporate gain, which could escalate with AI’s growing ability to simulate emotions. This capability might strengthen emotional bonds with users, particularly those experiencing loneliness, increasing the risk of privacy infringement.

AI is already embedded in daily life through smart devices that listen for commands to improve functionality and serve personalised ads. This ubiquity prompts questions about the future of privacy.

In conclusion, Dimitar argued that while AI holds promise for societal benefits, such as environmental improvement, strict regulation is needed. Recent EU legislation attempts to set boundaries for data collection; effective protection of users' rights under Article 8 in an era of advancing AI requires that companies observe those limits.

Topic 7. ‘Civic education in the condition of artificial intelligence development’. Dr Vira Haponenko, KNEU.

Dr Haponenko raised the topic of academic integrity and AI. She highlighted that artificial intelligence offers a platform that can facilitate academic integrity and cultural understanding, and emphasised the importance of adhering to research methodology standards and acceptable means of applying AI technology. By simplifying the search for primary information and enhancing the precision of statistical and informational data analysis, AI potentially enables large-scale research and problem-solving across various domains.

However, the use of AI complicates the acquisition of valuable information due to the proliferation of fake content and biased viewpoints, even among experienced researchers. The dissemination of misinformation, whether intentional or inadvertent, poses a significant challenge, especially in academic and societal contexts. AI-generated materials, distributed through social networks or other channels, may lack proper verification, leading to the spread of falsehoods.

In the academic community, there is a risk of the deliberate dissemination of false information, which can damage societal trust and credibility. This is particularly pertinent in the context of information warfare and propaganda, such as that seen in war zones. Institutions of higher education play a crucial role in promoting civil, political, and legal awareness, and must therefore prioritise the development of democratic values and critical thinking skills among students and faculty.

Counteracting the influence of misinformation requires a multifaceted approach, including media literacy education, critical thinking training, and ethical considerations in research practices. Additionally, there is a need to monitor and combat propaganda narratives effectively, while promoting a culture of civic responsibility and ethical conduct among students, scientists, and educators.

Ultimately, Dr Haponenko argued that integrating AI responsibly into educational practice involves not only teaching its technical aspects but also instilling a deep appreciation for democratic principles and ethical behaviour. Education should empower individuals to critically evaluate information, engage in constructive discourse, and uphold the values of democracy and integrity.

Topic 8. ‘Exploring the Nexus of AI in Healthcare: Balancing Benefits and Risks for Patients’ Rights’. Anastasia Vasiakina, 2nd year student, KNEU.

Anastasia raised the topic of AI in the healthcare sector. The integration of artificial intelligence (AI) in healthcare presents a complex landscape of benefits and challenges. AI's ability to analyse vast patient data and predict outcomes holds promise for personalised treatment and improved healthcare access. However, understanding AI's mechanisms and implications is important. Patients need to understand the experimental nature of AI-driven treatments and the potential risks involved. Transparency policies are crucial to address concerns regarding data privacy and patient safety.

While AI offers efficiency and cost-effectiveness, it also raises ethical questions. Who bears responsibility for errors in AI-driven decisions? How do we address job displacement resulting from automation? The World Health Organization has outlined steps for AI implementation, but regulatory frameworks remain weak.

Despite these challenges, AI has the potential to revolutionise healthcare, especially in underserved regions. Digital systems can provide round-the-clock assistance, offering a lifeline to those in urgent need. However, concerns persist about data security and patient consent.

Anastasia suggested that some initial solutions may lie in empowering patients to control their health data and ensuring robust regulatory oversight. The responsible use of AI requires a commitment to ethical principles and human rights. By navigating these challenges with diligence and accountability, AI can enhance healthcare outcomes and quality of life for patients worldwide.

Topic 9. ‘Distinguishing artificial intelligence and human thinking’. Victoria Kushnir, Mykhailo Shuldiner, Yana Yaroshenko, 2nd year students, KNEU.

Mykhailo, Yana, and Victoria compared AI with people and asked what it means to be human, identifying eight key themes. The comparison between artificial intelligence (AI) and the human brain reveals fundamental differences in flexibility, information processing, perception of context, adaptation to uncertainty, origin and development, learning methods, multitasking abilities, and consciousness.

1. Flexibility and Plasticity: The human brain's plasticity enables learning, memory, and adaptation far beyond current AI capabilities. The brain seamlessly integrates emotional and sensory aspects, fostering complex understanding, empathy, and creativity, qualities currently beyond AI's reach.

2. Information Processing: AI exhibits a clear division between data transfer and processing, optimising efficiency but limiting flexibility. In contrast, the human brain integrates information seamlessly with adaptable neural networks, enabling learning and adaptation across various contexts.

3. Perception of Context: Human intelligence adapts its understanding and actions based on context, crucial for social interaction. AI, while advancing in natural language processing and decision-making, still struggles with interpreting context flexibly.

4. Adaptation to Uncertainty: The human brain effectively operates in uncertain and dynamic environments, while AI relies on established rules and predictability, lacking the adaptability to handle unpredictable situations.

5. Origin and Development: AI is engineered for specific tasks, limiting its adaptability, whereas the human brain's evolutionary development fosters flexibility and adaptability to a wide range of challenges.

6. Learning Methods: The human brain learns from diverse experiences, observations, and emotional encounters, forming deep understanding and skills. AI depends on data and instructions, limiting its ability to generalise knowledge to new situations.

7. Multitasking Abilities: The human brain efficiently switches between tasks, from complex cognitive processes to automatic actions, a result of evolutionary development. AI, optimised for specific tasks, requires additional training or programming to perform new tasks effectively.

8. Consciousness and Self-Awareness: Unlike AI, the human brain possesses consciousness and self-awareness, shaping human identity, morality, and interactions with the world.

In conclusion, they argued that while AI offers transformative potential, it lacks the complexity and uniqueness of human intelligence. As society navigates the integration of AI into various aspects of life, it must address challenges and ensure responsible development. Human thinking and intelligence remain valuable, requiring mindful consideration amid rapid technological advancements.

Topic 10. ‘AI and the right to education: leveraging technology for inclusive and equitable learning opportunities’. Diana Sviderska, 1st year student, KNEU.

Equitable, diverse, and inclusive learning has been a priority for educators for many years, sometimes with limited success. Diana explored the intersection of artificial intelligence (AI) and human rights, focusing particularly on the implications for education. She addressed key questions regarding AI's impact on education globally and proposed recommendations to maximise its benefits while ensuring ethical use and inclusivity.

One major concern is the amplifying effect of AI's rapid advancement on the right to education worldwide; Diana identified challenges arising from AI's integration into education systems and its potential effects.

Teachers are encouraged to embrace new tools and adapt to students' evolving needs in a digital age. Incorporating online collaborative platforms and adopting universal design principles can enhance accessibility and cater to diverse learning styles and abilities. While AI offers advantages in streamlining tasks and personalising learning experiences, human educators remain essential for providing empathy, guidance, and critical thinking skills.

Diana underscored the importance of teacher training and professional development for integrating AI tools effectively into teaching practice while upholding educational principles. She also highlighted the collaborative potential of AI and human labour in education, emphasising the need for responsible use to ensure effective teaching and learning outcomes.

Overall, Diana argued that while AI holds promise for transforming education, further research and collaboration are necessary to address emerging challenges and ensure its responsible and equitable implementation. By exploring these issues and leveraging AI's potential, education can evolve to empower students in an increasingly digital world.

Topic 11. ‘The future of AI and human rights: emerging trends, potential risks, and strategies for ensuring a human-centric approach’. Sofiia Rengevich, 2nd year student, KNEU.

The potential impact of artificial intelligence (AI) on fundamental human rights is vast, presenting both opportunities and challenges. While AI has the capacity to revolutionise various aspects of human life, including healthcare and access to services, Sofiia argued that it also raises concerns regarding individual privacy and discriminatory outcomes.

AI-powered systems have shown remarkable accuracy in identifying physical health concerns, leading to improved patient outcomes and extending access to underserved populations. However, the collection and use of vast datasets raise questions about data privacy and the potential misuse of personal information. There is a risk that AI algorithms trained on biased data could perpetuate existing social biases or lead to discriminatory outcomes, mirroring bias seen in other sectors, such as loan applications in underserved communities.

International treaties such as the Universal Declaration of Human Rights provide guiding principles for the development and deployment of AI, emphasising privacy and non-discrimination. Responsible AI development requires collaboration among scientists, developers, and lawmakers to prioritise ethical data practices and ensure transparency in algorithmic decision-making.

Legal frameworks must address data protection and accountability throughout the research and development process. Promoting open dialogue and public awareness is crucial for building trust and ensuring that AI advancements serve the common good.

In conclusion, Sofiia notes that navigating the relationship between AI and human rights requires ongoing consideration and proactive solutions to mitigate risks and maximise benefits. By upholding fundamental principles and fostering collaboration, AI can contribute positively to society while safeguarding human rights.

Topic 12. ‘The future of AI: an expert evaluation’. Liaan Booysen, computer science and high-risk management specialist, special guest, Argentina.

Liaan Booysen brought a wry and light-hearted view of AI and why we should not fear it, focusing on the practical applications and potential pitfalls of artificial intelligence across various domains and emphasising its role as a tool rather than a replacement for human expertise. Illustrations from his personal experience with AI, particularly in automating tasks related to research, language translation, and communication, brought the key points to life.

Drawing parallels between AI and a horse, Liaan highlighted the need for responsible management and accountability in AI use. While AI can enhance efficiency and effectiveness, akin to riding a horse for faster travel, it requires proper care and guidance to avoid unintended consequences. The analogy underscored the importance of understanding and controlling AI's capabilities to prevent misuse and ensure positive outcomes.

Liaan also touched on the evolving relationship between humans and AI, acknowledging the inseparable integration of technology into daily life. Despite concerns about data privacy, bias, and misinformation, he advocated for embracing AI's potential while implementing safeguards and quality control measures.

Discussion turned to the potential impacts of AI on education, particularly the challenge of combating the "transient effect", where individuals seek shortcuts to learning. Rather than attempting to restrict AI usage, Liaan suggested embracing its complexity and leveraging specialised models for improved outcomes.

Privacy and security concerns, especially in sensitive areas such as healthcare, prompted consideration of specialised AI solutions hosted by trusted entities. Liaan concluded with a reflection on the inevitability of AI's influence on society, urging adaptation, accountability, and proactive measures to harness its benefits while mitigating risks.

The conference was characterised by sharp, probing questions that prompted deep and interesting answers on societal issues and considerations for law and policy makers for years to come.

Other notable contributions came from:

Associate Professor Denys Lifintsev raised a set of interesting questions and provided some critical ideas for further discussion.

During the discussion, Professor Bill Bowring of Birkbeck Law School, an experienced barrister, supplemented the content of the conference with an analysis of practical applications of AI, particularly in the education sector, reflecting on the challenges of existing software with an eye to what could be improved in the future.

Associate Professor Volodymyr Machuskyy (Snr) focused on the need to develop a legal concept of artificial intelligence.

Ivann De Kock, a guest speaker from South Africa at the conference, highlighted the real-life challenges of artificial intelligence in his work against human trafficking. He emphasised how AI chatbots have been beneficial in reintegrating victims into society, particularly for those who have become non-verbal and generally distrustful of others. Additionally, he pointed out the delicate balance between using and misusing data collection through these tools.

The conference organisers also acknowledge Joseph McMullen at Bournemouth University for bringing together the students and academics for these ground-breaking conferences.

Technical support for the conference was provided by Volodymyr Machuskyi (Jnr), to whom the organisers are grateful for his close attention to detail.

In the final part of the conference, the results were summed up and points of contact for further scientific cooperation between the conference participants were outlined.

Acknowledgements: Summaries of the speakers’ presentations were supported by OpenAI ChatGPT-4 (2024).

@bournemouthuni  #AI  #LawandAI  #AcademicIntegrity   #UNSDG4


 
