Nov. 2023 | A deep dive into Generative AI, activism & social enterprise
This month Red Social Innovation started a collaboration with the Digital Humanitarian Network, which recently published the report Generative AI for Humanitarians. Find below the interview with its co-author Nasim Motalebi. We also had a chat with Christian Vanizette, co-founder of makesense, who recently launched the app chilli, enabling citizens to take daily action to stop major carbon emitters using the power of the internet and social media.
🇪🇺 | On the 26th and 27th of October, the Red Social Innovation team participated in the Social Innovation Forum in Brussels, organised by the European Competence Centre for Social Innovation and the ESF+ Communities of Practice. Ruth Paserman, Director at DG Employment, Social Affairs and Inclusion at the European Commission, and Neringa Poskute, Head of the Social Innovations and Transnational Initiatives Division, presented the Social Innovation Match (SIM) platform, which aims to promote the transfer and scaling-up of social innovation across Europe. More than 200 innovators from all over Europe gathered to share best practices and challenges.
As usual, find below the latest solutions published on Red Social Innovation, coming from the United Kingdom, Switzerland, Kenya, Uganda, Zambia and Mozambique.
🇬🇧 | The British Red Cross launched the social enterprise Leaps and Grounds. This coffee business aims to break down the barriers to integration for women refugees in the UK. It provides speciality barista training, paid work experience and one-to-one employability mentorship to support women refugees into long-term employment.
Learn more about Leaps and Grounds.
🇰🇪 🇺🇬 | Digital Identities (DIGID): the digital wallet for refugees
The Dignified Identity (DIGID) solution aims to create digital wallets for refugees, asylum seekers, and internally displaced people, providing credentials and proof of eligibility for humanitarian assistance in Cash and Continuity of Healthcare.
Learn more about DIGID.
🇨🇭 | How to reconnect people by listening to strangers in the street
Two chairs and a warm welcome: that is all you need to reconnect people in a public space. Viens t’asseoir, je t’écoute (“Come sit down, I’m listening”) is a citizen initiative launched on the banks of Lake Geneva (Lac Léman) in Geneva, Switzerland.
Learn more about Viens t’asseoir, je t’écoute.
🇩🇰🇰🇪🇲🇿🇿🇲 | Hiveonline, the start-up that facilitates access to microcredit for African farmers
The start-up Hiveonline links unbanked and remote communities of African smallholder farmers to financial services and trade opportunities to improve their livelihoods and financial stability. It partners with off-takers and banks to build an economic ecosystem offering fair trade and access to formal credit for these communities.
Learn more about Hiveonline.
ACTIVISM & SOCIAL MEDIA
Christian Vanizette: “chilli app makes social networks useful”
In 2024, Facebook will turn 20. Even with that much hindsight, it is hard to say whether social media have a positive impact on society. Fake news, anxiety, time management, mental health… social media still need to prove that they can improve the society we live in. Christian Vanizette, co-founder of makesense.org and an Obama Foundation scholar at Columbia University, didn’t wait for Zuckerberg to change things. He launched chilli, an app that enables citizens to take daily action to stop major carbon emitters using the power of the internet and social media. In recent months, this climate activist has been actively involved in campaigns against the East African Crude Oil Pipeline (EACOP), mobilising a wide community online. Born and raised in Tahiti, French Polynesia, Christian Vanizette has witnessed first-hand how this group of islands is threatened by climate change. Today, through chilli, he allows all smartphone users to fight climate change at the click of a button. The goal of the app? To educate, inform and make sure you use your smartphone for more than Candy Crush and TikTok.
What is chilli?
chilli is an iPhone app that enables you to take part in impactful actions to stop climate change.
How did the idea for this app come about?
When I went to COP26 in Glasgow to take part in campaigns with activists like Camille Etienne, I understood that, with campaigns against new fossil fuel projects and in favour of renewable energies, activists could have the greatest impact in terms of emissions avoided. In fact, stopping a new oil pipeline project is like avoiding 10% of the annual emissions of a country like France over 30 years! That's huge! How can we get more people involved alongside them, in a simple, effective and everyday way? That's what we're doing with chilli.
What campaigns are currently running on the app?
On chilli today, you can take part in the campaign to stop the EACOP as well as the Rosebank campaign in the United Kingdom, or campaign for a law to limit plastic in Europe via the European Parliament. Every day, activists suggest simple actions that you can do to help them and that take less than 20 seconds!
Are empowerment and social networks compatible?
Yes, we make social networks useful.
What's the message behind chilli?
Power is ultimately in our hands! Nothing is lost, let's use it!
Discover & follow chilli app now!
GENERATIVE AI & HUMANITARIAN ACTION
Nasim Motalebi: “Generative AI tools are not meant to replace humans but to work with them”
Nasim Motalebi is the co-author of the report Generative AI for Humanitarians, recently published by the Digital Humanitarian Network. She holds a Ph.D. in Information Sciences and Technology and a Master's degree in Architecture from Penn State University. An expert in human-centered research and humanitarian informatics, she works on the challenges and opportunities of adopting technologies in the humanitarian and development sector. She is especially critical of the geopolitical environments and global policies that impact the localization of digital solutions, as is evident in her research on forced migration response and refugee integration in Uganda, Kenya, Ecuador, Malaysia, and Indonesia.
In the past, Nasim has supported international organizations in developing digital strategies and policy frameworks for equitable solutions. She has worked with renowned organizations such as the United Nations OCHA and the German Institute of Development and Sustainability (IDOS).
According to industry forecasts, by 2025, 30% of outbound marketing messages from large organisations will be synthetically generated; by 2026, Generative AI will automate 60% of the design effort for new websites and mobile apps; and over 100 million people will engage robo-colleagues (synthetic virtual colleagues) to contribute to enterprise work. What is the final goal of generative AI technology?
Generative AI is a powerful new tool that can help its users increase productivity, generate more insight from data, and support creativity. Generative AI tools are not meant to replace humans but to work with them. The idea of AI as a new humanitarian is also flawed and really should be debunked.
As I see it, the final goal of Generative AI highly depends on the developers, users, and other stakeholders. So perhaps we should ask, “What are the problems that generative AI will help us resolve in the humanitarian sector?” Or “What are the problems it will not be able to address?” Generative AI tools can help us to enhance data quality and diversity, or they can enhance the efficiency and efficacy of conducting repetitive tasks. But they are NOT creative agents (merely probabilistic agents), nor are they the ultimate decision-makers.
Beyond the for-profit industries, generative AI also offers profound opportunities to the humanitarian workforce: emergency preparedness and response, improved access to information in emergencies, real-time situational awareness, a more efficient supply chain, more efficient human resources, and support for fundraising and advocacy. What are the main dangers of this deep transformation?
The most critical element in crisis response is decision-making. Making informed decisions relies heavily on information access, planning, coordination, communication, and situational awareness, to name a few. Historically, AI tools have augmented the humanitarian workforce across the disaster lifecycle. But more critically, emergency response requires leadership, transparency, responsibility, and accountability. This is where we should account for the risks of AI-driven transformations. Overall, there are three critical risks to keep in mind:
1. First, AI tools are limited by the type and timeliness of the data they are trained on and by the algorithms used. Therefore, the insight generated by AI tools can be heavily biased, unreliable, or incorrect, which poses a great risk to emergency response. Nevertheless, in crisis situations, where manpower is limited and time is critical, there will be a greater need to rely on AI-powered tools for support. This is a double-edged sword, with no easy answer.
2. Secondly, content and data generated by AI can be unreliable or invalid, and can lack creativity or human sensitivity. All such limitations can cause distrust of, or indifference toward, generated data. This is especially important when working on advocacy or fundraising.
3. And third, AI tools are overhyped and anthropomorphized as ultimate decision-makers. AI tools do not understand social, cultural, or political circumstances; they can only provide insight into pre-defined indicators or parameters. Ultimately, the humanitarian workforce is responsible for ensuring that data is representative of the population and that strategies of action are appropriate for a given situation. The risk here is failing to develop a culture of responsible and ethical use of AI, from leadership to the operational workforce, within the humanitarian sector.
Can you describe one case study illustrating how generative AI substantially improved humanitarian action?
Generative AI, as we know it today, is in the early stages of trial and development. But there is evidence of how it can support the automated labeling of satellite imagery for disaster response. However, one of the major benefits of Large Language Models is their ability to enhance access to and analysis of qualitative or descriptive data. For a long time, qualitative data has been seen as difficult to process or use. Now we can see that such tools can enhance the accessibility and impact of text-based data in humanitarian response. Some organisations have already developed sandboxed environments to interact and chat with their text data in different languages.
Humanitarian action follows strict rules and principles, such as do no harm or data protection policies… Is generative AI technology capable of taking these elements into account?
Currently, commercially available tools pose data protection risks and are subject to bias due to their unregulated development and the underlying social biases contained in their training data. For example, a lack of diversity in the training data breaches the principles of diversity and inclusiveness. There is no doubt that Generative AI inputs and outputs will improve over time with enhanced feedback learning. Despite such projected improvements, the decision-making and supervision of generated outputs ultimately require a human(itarian)-in-the-loop. Only then can we ensure the principles of do no harm are upheld in decision-making.
This isn’t anything new either. Enforcing existing data and information policies, as well as enhancing in-house capacities, are some of the strategies for humanitarians moving forward. We must recognize that the risks associated with Generative AI are not entirely a technological problem, but rather a socio-political problem.
Choices, content and strategies dictated by Generative AI will have a strong impact on the real world. Who will bear legal responsibility for these actions?
The question of accountability and legal responsibility is a hard one. It is something that different entities would like to push onto the “all-knowing AI algorithm”. First off, I want to emphasize that Generative AI tools are NOT all-knowing agents and CANNOT simply “dictate” a strategy, content, or decision. The AI developers and users are ultimately responsible for how and why AI outputs are used and for their real-world effects. This is why it is critical to develop legal guardrails when working with technology corporations. Organisations should also develop regulatory guidelines and accountability systems to define internal legal responsibilities. It is also very critical to ask whom we are protecting and to what end. The ultimate goal should be to protect and support crisis-affected populations. So, developing balancing strategies among all such stakeholders is key.
Read the report Generative AI for Humanitarians.
It’s time to act!
Join the FACT IMPACT program | Launched by HEC business school & the Sisley-d’Ornano Foundation, and supported by the Social Impact Measurement team of the French Red Cross, this programme aims to measure the impact of projects within your Red Cross, association or social enterprise.
In order to do so, a team of three students trained in impact assessment and supported by an expert mentor (providing 6 hours of tutoring per group) will evaluate the social impact of a specific program over the course of 6 weeks. At the end of the project, a detailed report will be presented to you, enriched with key data and concrete testimonials illustrating your social impact.
This program aims to:
Immerse students in the vast universe of the social and solidarity economy, offering them a unique discovery opportunity;
Raise awareness about crucial issues related to social impact evaluation, while equipping them with essential methodological tools to conduct such evaluations effectively;
Provide them with a concrete opportunity to put their knowledge into practice through fieldwork, alongside a social impact project holder.
Are you interested in joining the program? Drop a line this week to Sandrine Bonin, Head of Social Impact Measurement at the French Red Cross: sandrine.bonin@croix-rouge.fr.
REDpreneur has relaunched | We are thrilled to announce that REDpreneur has secured funding to turn the pilot project into a multi-year program supporting the development of business skills and the growth of social businesses in core areas of the Red Cross and Red Crescent Movement. On 19th September, REDpreneur kicked off the 2023 Online Training Academy Global, welcoming 30 changemakers from the Red Cross & Red Crescent Movement, CSOs, and impact start-ups from 14 countries working on 13 business cases.
A week later, on the 26th of September, it launched the Online Training Academy Ukraine, welcoming 70 changemakers from the Ukrainian Red Cross Society, coming from 18 regions of Ukraine and working on 20 different venture ideas aimed at unlocking social business opportunities in this unique crisis context.
But that’s not all. At the end of January, REDpreneur will start its Master Class: a 3-month hybrid accelerator programme targeting RC/RC National Societies, local CSOs and start-ups. The Master Class takes financially viable and impactful business cases to pilot and funding readiness.
Are you an RC/RC National Society, CSO or start-up tackling one or more of the five Global Challenges through social entrepreneurship: 1) Climate Change, 2) Crisis and Disasters, 3) Health, 4) Migration and Identity, 5) Values, Power, and Inclusion?
Then stay tuned, follow REDpreneur on LinkedIn and visit the website www.redpreneur.org for more information!
Would you like to get in touch to start a collaboration, share a message or submit a solution on Red Social Innovation? Please contact: giulio.zucchini@croix-rouge.fr.
Thanks to Alice Piaggio for the illustration 🌈
Founders
Partners