Review

Enabling people-centric climate action using human-in-the-loop artificial intelligence: a review

Ramit Debnath 1,2,3, Nataliya Tkachenko 1,4 and Malay Bhattacharyya 3

Climate action includes a variety of efforts to address climate change and its impacts. Achieving collective public agreement to engage in climate action is complex, as it is influenced by political, ideological, and economic factors and faces resistance from powerful industries. With the progression of digitalisation, large amounts of user-generated data are available, opening new pathways to understanding human behaviour in relation to climate action using artificial intelligence (AI). Integrating human knowledge and perception into AI systems via human-in-the-loop (HITL) frameworks can improve contextualised decision-making while mitigating biases. This review explores how HITL design can support AI for climate action at both the micro- and macro-scale, especially synthesising instances where HITL systems provide a pathway for ethical alignment, integrating diverse human perspectives to ensure that AI-driven climate solutions respect cultural and social values.
Addresses
1 Collective Intelligence & Design Group, University of Cambridge, Cambridge CB2 1PZ, UK
2 Climate and Social Intelligence Lab, Caltech, Pasadena 91125, USA
3 Machine Intelligence Unit, Indian Statistical Institute, Kolkata 700108, India
4 AI Centre of Excellence, Lloyds Banking Group, London EC2V 7HN, UK

Corresponding author: Debnath, Ramit (rd545@cam.ac.uk), @RamitDebnath (Debnath, Ramit)

Current Opinion in Behavioral Sciences 2025, 61:101482

This review comes from a themed issue on Behavioral Science for Climate Change, edited by Madalina Oana Vlasceanu and Grace Lindsay. For a complete overview of the section, please refer to the article collection, "Behavioral Science for Climate Change (2025)".

Available online 24 January 2025
Received: 14 October 2024; Revised: 17 December 2024; Accepted: 6 January 2025
https://doi.org/10.1016/j.cobeha.2025.101482

2352-1546/© 2025 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Introduction

Climate action refers to the range of efforts taken to combat climate change and its impacts [1,2•]. It usually consists of steps taken to reduce and prevent the release of greenhouse gas emissions into the atmosphere (mitigation) and adjustments in ecological, social, or economic systems in response to actual or expected climatic change and its effects (adaptation) [3]. The science is clear that human activities have had a significant impact on the Earth and continue to accelerate climate change [3]. We urgently need mitigation and adaptation to prevent further climate-induced damage, which can be achieved through collective action and large-scale behavioural changes that put people at the centre of climate action [4,5]. Enabling a people-centred transition is difficult due to the complexities associated with factors that influence consensus on collective climate action.
These factors can include diverse interests and priorities, political and ideological differences, unequal responsibility and impact, economic disruptions, the interests of powerful actors (such as the big oil and mining industries), social and behavioural barriers, and differences between short-term and long-term thinking [6,7]. There is an urgent need to unpack these complexities. Social scientists have used tools such as online polls, field surveys, and ethnographic and observation-based approaches to better understand the complex relationships that shape public agreement on climate action. However, these tools tend to be resource-intensive and often limited in scale. There are also concerns about the generalisability of the results, even in well-designed and preregistered experimental surveys [4,8].

With digitalisation, that is, the ongoing digital transformation of our society and economy, large amounts of user-generated data have become available, which has begun to provide unique clues about human behaviour and is finding meaningful applications in understanding and supporting collective action towards climate change [9••]. Because the scale of such user-generated data runs to millions of gigabytes, tools such as machine learning (ML) and deep learning (DL) have become indispensable for deriving meaningful patterns from these data sets. Currently, the core processes of an artificial intelligence (AI) system are defined by ML and DL approaches, which use sophisticated data analysis techniques to imbue machines with human intelligence traits such as learning, problem solving, and prioritisation.
These characteristics enable a machine to solve complex problems by deductive reasoning based on observed data, combining computation, models, and algorithms to make useful predictions or decisions [10,11]. The more we add human-like deductive capabilities to AI systems, the more important it becomes to add feedback cycles to correct biases in their decision-making. A human-in-the-loop (HITL) AI design strategy is increasingly considered a way to accomplish what neither humans nor machines can on their own; that is, when a machine is incapable of solving a problem, people must intervene and provide the feedback loops for solving it [11–13]. An HITL approach has been found to reduce biases and improve trustworthiness in AI systems, making them more likely to be used for decision-making tasks [10–12].

In this article, we use the term 'human-in-the-loop' (HITL) to refer to a critical emerging technique in the field of AI and ML that actively involves humans in training, testing, and refining AI systems through active learning or reinforcement learning. The end goal of an HITL design is to align AI models with human preferences [11,14]. In doing so, we provide an overview of this rapidly evolving area of HITL AI, examining the current advances in HITL methods that aid decision-making in climate action. We structure the paper as follows.
Section 'Human-in-the-loop in machine learning pipeline' provides an overview of the micro- and macro-contexts of HITL in AI and ML. Section 'Human-in-the-loop considerations for climate action artificial intelligence' presents how HITL is being considered as a design framework in the climate action AI context. We present how HITL AI models are exploring behavioural insights for enabling climate action, and finally, we present the challenges and emerging trends in human–AI collaboration for climate decision-making tasks.

Human-in-the-loop in machine learning pipeline

The current ML paradigm considers HITL design from a data lifecycle perspective, where ML algorithms require humans to improve performance iteratively [15,16]. The approach emerged in the mid-1990s [17], but its prominence has grown significantly in the era of generative AI, where the complexity, unpredictability, and ethical implications of AI outputs demand a more interactive and iterative approach. In a traditional ML pipeline, humans play an important role across data extraction, preprocessing, integration, cleaning, annotation, labelling, training, and inference (see Figure 1). Beyond these auditing and data architecture steps, there are other significant HITL steps in the ML production pipeline, including quality improvement, cost reduction, latency reduction, and active learning [16]. However, with the emergence of generative AI, humans are expected to play a more nuanced role in aligning AI models to their preferences that goes beyond the typical data auditing steps, including interrelation and augmentation [18]. To date, such emerging configurations and relationships between humans and AI have not been thoroughly studied [14•].

In the literature, the boundaries between HITL and general human–computer interaction (HCI) are often blurred.
While general HCI encompasses all forms of interaction between humans and computers, HITL specifically refers to systems where human input actively shapes and enhances the AI's learning process in a feedback loop. This involves techniques such as active learning [19], where humans label critical data points selected by the AI, or reinforcement learning from human feedback (RLHF), where human evaluations directly influence the decision-making of the AI system [20]. By distinguishing HITL from broader HCI concepts, it becomes clear that HITL represents a paradigm where human expertise and judgement are integral to improving and aligning AI systems with human goals and values.

Figure 1. A typical HITL ML pipeline. Adapted from Ref. [16].

In particular, RLHF has emerged as a prominent HITL approach in which generative AI models, such as large language models, are trained and iteratively corrected using human input to perform tasks that align more closely with human preferences [13,20,21]. For example, in OpenAI's RLHF process for ChatGPT, a reward model is initially trained to mirror human preferences; subsequently, this reward model guides the training of other models through reinforcement learning. Eventually, the model receives feedback on its outputs and modifies its behaviour in response [22].

Human-in-the-loop considerations for climate action artificial intelligence

AI has emerged as a powerful tool for processing large data sets, optimising resource use, and improving predictive accuracy in the context of climate science. There is a growing micro–macro paradigm of AI use cases and model development in the climate AI community, where the micro level focuses on algorithmic-scale innovations and inventions, and the macro level focuses on environmental, societal, and sustainability impacts.
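As a minimal sketch of the active-learning variant of HITL introduced above, the toy example below lets a model query a simulated human annotator for labels on the points it is least certain about. The one-dimensional threshold task, the oracle function, and all names here are illustrative assumptions for demonstration, not drawn from the reviewed studies; in a real HITL pipeline the oracle would be an expert working through a labelling interface.

```python
# Illustrative pool-based active learning with uncertainty sampling.
# The "human" is a stand-in oracle function (an assumption for this
# sketch), and the task is learning a 1-D decision threshold.

TRUE_THRESHOLD = 0.62  # ground truth, unknown to the learner

def human_label(x):
    """Simulated human-in-the-loop annotator."""
    return 1 if x >= TRUE_THRESHOLD else 0

def fit_threshold(labelled):
    """Decision boundary: midpoint between the highest 0 and lowest 1."""
    zeros = [x for x, y in labelled if y == 0]
    ones = [x for x, y in labelled if y == 1]
    if not zeros or not ones:
        return 0.5
    return (max(zeros) + min(ones)) / 2

def active_learning(pool, budget):
    # Seed with the two extremes so both classes are represented.
    labelled = [(pool[0], human_label(pool[0])),
                (pool[-1], human_label(pool[-1]))]
    unlabelled = list(pool[1:-1])
    for _ in range(budget):
        t = fit_threshold(labelled)
        # Uncertainty sampling: query the point closest to the
        # current boundary, i.e. the one the model is least sure of.
        query = min(unlabelled, key=lambda x: abs(x - t))
        unlabelled.remove(query)
        labelled.append((query, human_label(query)))  # human feedback
    return fit_threshold(labelled)

pool = [i / 100 for i in range(100)]
estimate = active_learning(pool, budget=8)
print(f"estimated threshold: {estimate:.3f}")
```

With only eight queried labels, uncertainty sampling concentrates human effort near the decision boundary instead of labelling the whole pool, which is the label economy that makes expert-in-the-loop annotation practical for large environmental data sets.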
For example, new AI algorithms are used to improve climate and weather predictions by mimicking the carbon cycle of the Earth [23]. Similarly, AI-led decision support can help reduce emissions by optimising electrical grids and traffic flow and improving the energy efficiency of household devices [24,25•]. However, there are limited examples of how AI is used for social decision-making related to climate action [26••].

Despite these advances, AI faces several limitations in climate action decision-making contexts. For example, a significant challenge lies in the inherent uncertainty and complexity of climate systems, which can make AI predictions unreliable without adequate human oversight [27]. AI systems can be opaque, making it difficult for decision-makers to trust or understand the rationale behind AI-generated recommendations [10]. Experts argue that human intervention is essential to validate AI-driven climate action decisions and ensure their alignment with real-world conditions [9,10]. By incorporating domain expertise, human operators can help contextualise AI predictions, correct model biases, and identify data gaps. Moreover, human oversight is critical to interpret AI output in a way that is actionable for policymakers, ensuring that AI recommendations are not only scientifically sound but also socially, politically, and morally viable [28•].

Climate AI needs a fusion of micro/macro HITL design configurations that facilitate participatory climate action, where stakeholders, such as governments, businesses, and communities, can contribute to the development and refinement of AI models [10•]. Hence, the scope of human intervention must expand from primarily data auditing (see Figure 1) to a more substantive role in shaping the AI system's judgement capabilities so that they are fair, accountable, transparent, and ethically aligned [10,12], which requires perspectives and knowledge from a wide range of stakeholders, from domain experts to end users [29].
In Figure 2, we illustrate that a climate AI would need a diverse range of HITL design considerations, from the microscale (data auditing and labelling tasks) to knowledge and perspectives on the drivers of participatory climate action. Although there are no prominent examples of a perfect climate action AI, in Table 1 we illustrate some recent studies that have integrated domain experts as HITL, mostly in an active learning context, to support climate action decisions.

HITL systems provide a pathway for ethical alignment, integrating diverse human perspectives to ensure that AI-driven climate solutions respect cultural and social values. This is particularly significant in participatory climate modelling, where stakeholders co-develop scenarios, or in disaster response, where human operators refine AI-generated recommendations to account for local contexts [40,41]. HITL also facilitates crowdsourced data improvement, involving communities in annotating and validating critical environmental data sets, thus reducing bias and improving model robustness [42].

Figure 2. HITL considerations for climate action AI. Source: Authors.

Looking ahead, HITL frameworks could address gaps in AI-driven climate solutions by scaling to incorporate diverse global participation, mitigating algorithmic bias through continuous oversight, and integrating indigenous knowledge systems to create culturally informed models for sustainable climate action. This approach positions HITL AI not only as a tool for optimisation but also as a collaborative platform for inclusive, adaptive, and resilient climate solutions.
From human-in-the-loop to human–artificial intelligence collaboration for behavioural insights: an emerging frontier

An emerging frontier in the context of HITL AI is its use to discern more accurate and useful insights into behavioural trends and the factors influencing climate change decisions. The literature at this stage is unclear on whether this constitutes a true HITL system or humans working in collaboration with AI to develop behavioural interventions that are culturally sensitive and contextually effective [10,26]. We synthesise some recent studies at this intersection in Table 2, which share a common goal of extracting behavioural insights for climate action.

As illustrated by the examples in Table 2, it is clear that human understanding plays a central role in the derivation of AI-based behavioural insights for climate action. The HITL design is effective on a macroscale, as shown in Figure 2, and generally leads to collaboration between humans and AI. The progression towards a wider human–AI collaboration model is anticipated, given that HITL enhances the credibility of AI models. Consequently, the prevalence of human–AI collaboration is expected to increase [49]. Similar insights can be drawn from a detailed meta-analysis on the usefulness of human–AI combinations [50••]. Therefore, we foresee AI for climate action being designed not only with HITL but also within a collaborative framework to facilitate the generation of behavioural insights.

Table 1. Examples of HITL AI for certain climate action tasks (Domain | Decision-making task | Human intervention in AI models | Ref).

Energy management | Optimising energy grids for renewable integration and improving the resilience of energy systems | Energy experts work alongside AI models to reduce prediction errors for energy demand and generation, adjust parameters to match real-time conditions, and respond to unexpected disruptions | [30,31]

Disaster management | (i) Wildfire emergency response and evacuation planning; (ii) flood management | (i) Emergency response teams input real-time data on fire spread, wind direction, and terrain into AI models to refine predictions and develop more accurate evacuation routes; (ii) human experts adjust flood prediction models based on real-time data from rivers, weather conditions, and infrastructure vulnerabilities to reduce damage to lives and critical services | [32–34]

Ecological and environmental monitoring | Wildlife species tracking and monitoring of soil quality | Real-time human feedback using active learning allows AI models to respond dynamically to rapidly changing ground conditions | [35,36]

Climate policy | Simulation of different policy scenarios, such as carbon pricing and emission regulation, and prediction of their impact on the environment and economy | Human intervention ensures that these predictive results align with political realities, social values, and ethical considerations. Policy experts work with HITL systems to adjust model parameters, interpret results, and translate AI-led insights into actionable recommendations | [37–39]

Table 2. Instances of human–AI collaboration for generating behavioural insights for climate action (Domain | Behavioural insights | Ref).

Sustainable consumption | AI learns from human purchase patterns and suggests viable, enhanced alternatives to consumers that can be more sustainable and climate friendly | [43]

Sustainable transportation choices | AI learns from humans to optimise travel demand and transit routes, improves urban infrastructure for active and sustainable transportation (such as cycling and walking), designs incentives for rideshare approaches that can reduce emissions, and actively learns from public attitudes towards climate policies | [44]

Policy compliance and evidence synthesis | HITL and human–AI collaboration can be used to synthesise insights for climate-friendly policy design | [45]

Behaviour change campaigns | Analysis of big data to determine which types of messages, incentives, or interventions are most likely to influence positive behavioural changes towards climate action. HITL AI systems can iteratively refine and customise behavioural change campaigns | [46,47]

Countering climate misinformation | Human–AI collaboration and HITL design can counter climate misinformation at scale by enabling fact checking with computational methods that perform claim detection, evidence retrieval, truthfulness classification, and transparent reporting | [48•,4]

As highlighted earlier, similar to HITL systems, any collaborative setup between humans and AI must address preexisting hurdles related to fairness, accountability, trustworthiness, and ethics [10,51,52]. This issue is especially pertinent in the field of climate action-focused behavioural insights, where AI models could perpetuate existing stereotypes or overlook variations in human behaviour between different cultural and socioeconomic groups [53•]. To counter these risks, HITL systems should integrate a wide range of perspectives in AI model development. Engaging human experts from various cultural, social, and economic backgrounds is essential to refining AI models and interpreting their results. Moreover, maintaining transparency and accountability in HITL AI systems is vital to cultivating public trust and ensuring fairness and equity in AI-led interventions (see Ref. [10] for more details).

Conclusion

As the global climate crisis intensifies, the need for innovative solutions that combine technological advancements with human judgement has become more pressing. HITL AI design represents a powerful approach to addressing this challenge by integrating the strengths of AI with human expertise.
By enabling collaboration between AI's data-driven models and human understanding, HITL AI can offer more accurate, contextualised, and ethical solutions for a range of climate-related problems. Additionally, AI for climate action should not only be data centric but also human centric to enable the creation of public value.

HITL as an AI design framework shows the potential to harness people's behavioural insights to provide a deeper understanding of consumer behaviours, consumption choices, and policy compliance, allowing for targeted interventions to promote sustainable and low-carbon practices. Human input is essential in refining these AI models to account for cultural, psychological, and social dimensions and to enhance their relevance and impact. Simultaneously, AI becomes more ethical, less biased, and more trustworthy for decision-making.

As AI systems increasingly integrate human judgement throughout every aspect of their value chain, collaboration between humans and AI is set to become more vital than ever. This not only opens up unprecedented opportunities but also lays the groundwork for crafting genuinely human-centric climate AI, where technological prowess and human insight merge to forge innovative and sustainable solutions.

CRediT authorship contribution statement

R.D.: Conceptualization, Writing – Original draft, review and editing, Supervision, Funding. N.T.: Conceptualization, Writing – Original draft, review & editing. M.B.: Writing – review & editing.

Data Availability

No data were used for the research described in the article.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

RD thanks the support of the Cambridge Humanities Research Grant, the UKRI Responsible AI Grant, the ai@cam AIDEAS grant, and the Bill & Melinda Gates Foundation [OPP1144].
RD acknowledges the support from the Machine Intelligence Unit, Indian Statistical Institute Kolkata, for hosting him as a visiting faculty.

References and recommended reading

Papers of particular interest, published within the period of review, have been highlighted as:
• of special interest
•• of outstanding interest

1. Fuso Nerini F, et al.: Connecting climate action with other sustainable development goals. Nat Sustain 2019, 2:674-680.

2.• Bouman T, Steg L, Perlaviciute G: From values to climate action. Curr Opin Psychol 2021, 42:102-107.
This study provides a comprehensive review of what motivates individuals to support and take climate action, particularly demonstrating that stronger biospheric values (caring about the environment) predict stronger engagement in climate action, although contextual barriers can inhibit their actions.

3. Lee H, et al.: Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. The Australian National University; 2023.

4. Debnath R, et al.: Facilitating system-level behavioural climate action using computational social science. Nat Hum Behav 2023, 7:155-156.

5. Magistro B, et al.: Identifying American climate change free riders and motivating sustainable behavior. Sci Rep 2024, 14:6575.

6. Patterson JJ, et al.: Political feasibility of 1.5°C societal transformations: the role of social justice. Curr Opin Environ Sustain 2018, 31:1-9.

7. Debnath R, et al.: Do fossil fuel firms reframe online climate and sustainability communication? A data-driven analysis. npj Clim Action 2023, 2:47.

8. Meckin R, Elliot M: Computational Social Science: A Thematic Review; 2021.

9.•• Creutzig F, et al.: Digitalization and the Anthropocene. Annu Rev Environ Resour 2022, 47:479-509.
This paper discusses the wider impact of digitalization on humanity and the environment, especially focusing on climate mitigation efforts and socio-political stability.

10.• Debnath R, et al.: Harnessing human and machine intelligence for planetary-level climate action. npj Clim Action 2023, 2:20.
This study provides an overarching view of how to align goals of climate action with trustworthy and less-biased AI development, emphasizing that HITL design can be a key strategy to achieve the above alignment.

11. Mosqueira-Rey E, et al.: Human-in-the-loop machine learning: a state of the art. Artif Intell Rev 2023, 56:3005-3054.

12. Kumar S, et al.: Applications, challenges, and future directions of human-in-the-loop learning. IEEE Access 2024, 12:75735-75760, https://doi.org/10.1109/ACCESS.2024.3401547

13. Retzlaff CO, et al.: Human-in-the-loop reinforcement learning: a survey and position on requirements, challenges, and opportunities. J Artif Intell Res 2024, 79:359-415.

14.• Memarian B, Doleck T: Human-in-the-loop in artificial intelligence in education: a review and entity-relationship (ER) analysis. Comput Hum Behav Artif Hum 2024, 2:100053.
This is a comprehensive synthesis of human-in-the-loop AI in the context of education. It provides a structural and pragmatic exploration of humans and AI in terms of the entities, relationships, and attributes.

15. Wu X, et al.: A survey of human-in-the-loop for machine learning. Future Gener Comput Syst 2022, 135:364-381.

16. Chai C, Li G: Human-in-the-loop techniques in machine learning. IEEE Data Eng Bull 2020, 43:37-52.

17. Dautenhahn K: The art of designing socially intelligent agents: science, fiction, and the human in the loop. Appl Artif Intell 1998, 12:573-617.

18. Grønsund T, Aanestad M: Augmenting the algorithm: emerging human-in-the-loop work configurations. J Strateg Inf Syst 2020, 29:101614.

19. Takezoe R, et al.: Deep active learning for computer vision: past and future. APSIPA Trans Signal Inf Process 2023, 12:1-38.

20. Dai J, et al.: Safe RLHF: safe reinforcement learning from human feedback. arXiv preprint arXiv:2310.12773; 2023.

21. Franceschelli G, Musolesi M: Reinforcement learning for generative AI: state of the art, opportunities and open research challenges. J Artif Intell Res 2024, 79:417-446.

22. Ferrara E: Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738; 2023.

23. Schneider T, et al.: Harnessing AI and computing to advance climate modelling and prediction. Nat Clim Change 2023, 13:887-889.

24. Eyring V, et al.: Pushing the frontiers in climate modelling and analysis with machine learning. Nat Clim Change 2024, 14:1-13.

25.• Rolnick D, et al.: Tackling climate change with machine learning. ACM Comput Surv (CSUR) 2022, 55:1-96.
This is a detailed and comprehensive survey of state-of-the-art applications of ML and AI in climate change mitigation. This paper identifies high impact problems where existing gaps can be filled by ML, in collaboration with other fields.

26.•• Beckage B, Moore FC, Lacasse K: Incorporating human behaviour into Earth system modelling. Nat Hum Behav 2022, 6:1493-1502.
This paper presents a novel social climate modelling framework for representing human behaviour that consists of cognition, contagion, and a behavioural response in climate models.

27. Materia S, et al.: Artificial intelligence for climate prediction of extremes: state of the art, challenges, and future perspectives. Wiley Interdiscip Rev Clim Change 2024, 15:e914.

28.• Linegar M, Kocielnik R, Alvarez RM: Large language models and political science. Front Political Sci 2023, 5:1257092.
This is a review paper that discusses in detail the recent trend in use of large language models and generative AI in political and social sciences. It also highlights methodological challenges and opportunities and touches on contexts around fairness and bias.

29. Avin S, et al.: Filling gaps in trustworthy development of AI. Science 2021, 374:1327-1329.

30. Ahmad S, et al.: A review of microgrid energy management and control strategies. IEEE Access 2023, 11:21729-21757.

31. Chen L, Meng F, Zhang Y: Fast human-in-the-loop control for HVAC systems via meta-learning and model-based offline reinforcement learning. IEEE Trans Sustain Comput 2023, 8:504-521.

32. Buchelt A, et al.: Exploring artificial intelligence for applications of drones in forest ecology and management. For Ecol Manag 2024, 551:121530.

33. Senarath Y, et al.: Designing a human-centered AI tool for proactive incident detection using crowdsourced data sources to support emergency response. Digit Gov: Res Pract 2024, 5:1-19.

34. Comes T: AI for crisis decisions. Ethics Inf Technol 2024, 26:12.

35. Talaat FM: Crop yield prediction algorithm (CYPA) in precision agriculture based on IoT techniques and climate changes. Neural Comput Appl 2023, 35:17281-17292.

36. Bothmann L, et al.: Automated wildlife image classification: an active learning tool for ecological applications. Ecol Inform 2023, 77:102231.

37. Tian L, et al.: Investigating the asymmetric impact of artificial intelligence on renewable energy under climate policy uncertainty. Energy Econ 2024, 137:107809.

38. Cowls J, et al.: The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations. AI Soc 2023, 38:1-25.

39. Benedikt L, et al.: Human-in-the-loop AI in government: a case study. In Proceedings of the 25th International Conference on Intelligent User Interfaces; 2020:488-497.

40. Jain H, et al.: AI-enabled strategies for climate change adaptation: protecting communities, infrastructure, and businesses from the impacts of climate change. Comput Urban Sci 2023, 3:25.

41. Cao L: AI and data science for smart emergency, crisis and disaster resilience. Int J Data Sci Anal 2023, 15:231-246.

42. Joly A, et al.: Overview of LifeCLEF 2023: evaluation of AI models for the identification and prediction of birds, plants, snakes and fungi. In International Conference of the Cross-Language Evaluation Forum for European Languages; 2023:416-439.

43. Khan S, et al.: Impact of artificial intelligent and industry 4.0 based products on consumer behaviour characteristics: a meta-analysis-based review. Sustain Oper Comput 2022, 3:218-225.

44. Lukic Vujadinovic V, et al.: AI-driven approach for enhancing sustainability in urban public transportation. Sustainability 2024, 16:7763.

45. Spillias S, et al.: Human-AI collaboration to identify literature for evidence synthesis. Cell Rep Sustain 2023.

46. Lee S, Park Y, Park G: Using AI chatbots in climate change mitigation: a moderated serial mediation model. Behav Inf Technol 2024, 43:1-17.

47. Matz S, et al.: The potential of generative AI for personalized persuasion at scale. Sci Rep 2024, 14:4692.

48.• Spina D, et al.: Human-AI cooperation to tackle misinformation and polarization. Commun ACM 2023, 66:40-45.
This study presents a new sociotechnical framework to tackle misinformation and polarization by presenting a productive interplay between algorithms and people.

49. Afroogh S, et al.: Trust in AI: progress, challenges, and future directions. Humanit Soc Sci Commun 2024, 11:1-30.
50. •• Vaccaro M, Almaatouq A, Malone T: When combinations of humans and AI are useful: a systematic review and meta-analysis. Nat Hum Behav 2024, 8:2293-2303.
This meta-analysis explores when combinations of humans and AI outperform either alone. On average, human-AI combinations performed significantly worse than either humans or AI alone, with performance losses in decision-making tasks and significantly greater gains in content-creation tasks. Finally, the authors show that when humans outperformed AI alone, the combination yielded performance gains, but when AI outperformed humans alone, it yielded losses.
51. Vyhmeister E, et al.: A responsible AI framework: pipeline contextualisation. AI Ethics 2023, 3:175-197.
52. Coeckelbergh M, Sætra HS: Climate change and the political pathways of AI: the technocracy-democracy dilemma in light of artificial intelligence and human agency. Technol Soc 2023, 75:102406.
53. • Cohen IG, et al.: How AI can learn from the law: putting humans in the loop only on appeal. npj Digit Med 2023, 6:160.
This paper provides a comprehensive overview of how human expertise can be combined with ML/AI judgement, drawing on use cases from clinical practice. It helps readers understand that, in a HITL AI design, human reviewers can add more nuanced clinical, moral, or legal reasoning that acts as a critical error-correction check on the AI/ML.