Charities: Is AI Really the Problem?

We’re getting worried about the use of AI for fundraising. But beyond the practicalities, it feeds into deeper anxieties about technology in the helping professions…

I did expect a bit of grumpiness when we started to talk about AI in the VCS and wider helping professions. I’m battle-scarred. In my experience, there tends to be resistance to new technology in these sectors. I get it. Some of us feel it takes us away from people. We can’t afford it. We don’t have time to train people (or the interest). The private sector drives it and can be rather patronising - and predatory. It’s often expensive - or presented to us at a high price. There are many good reasons for us to refuse it. But my feeling is that many of those reasons aren’t really about the technology at all.

AI in Fundraising

AI is getting some attention now in the VCS, as a couple of new developers are trying to sell into this most specialist of markets, particularly to streamline bid-writing and fundraising. Some providers seem to be broadly indigenous to the sector, while some are obviously just external players who haven’t learned the size of the market yet. (Good luck with that…)

Last week on LinkedIn there was a bit of a flare-up of anger. Resistance to AI seems to be a matter of pride for some. ‘You will always need real writers!’ ‘We need good teams, not AI.’ Another guy – a therapist in the VCS – was saying how much more he enjoyed ‘real’ LinkedIn posts (!!!??)

He just seemed to have a vague feeling of anger and unease he couldn’t quite pin down, except on ‘AI’. There was a bit of a social media pile-on against those flogging systems (in this case, one of the better ones…). We’re right to be cautious about AI sharks. But not so cautious that we miss the boat.

I don’t know how often you have to say this for people to listen: Yes. Yes, we will need good people and fundraisers. Yes, you need good people, not bots. Yes, you need people who can write (they are thin on the ground, actually…). Yes, AI will not write like you. No, AI will not cuddle your dying grandma. No, AI will not, cannot, replace you.

But there’s also surely a bit of an irony here. What are people suggesting AI does for us? Bid-writing. Something we’ve been complaining about for decades, not least for the time it takes away from our work. It’s not even doing the interesting bit of that (for me, the core writing, strategy and development). It’s the drudge work, the repetition we all hate. So… now we want to keep doing it?

Funders and buyers are also getting worried. The Cabinet Office has just issued advice that all use of AI should be declared when writing bids. Why? Do you declare spellcheck? Is it a final exam? Are we checking for the ability to write bids unaided, or the ability to do the work? The spectre of some kind of dishonesty here is not about a technology. The dishonesty comes from claiming to do something you don’t, or claiming abilities you don’t have in the work you do, not in how you write. Funders have been hand-wringing for decades about how smaller entrants to markets don’t have the time or the skills to write the top-flight professional bids that win over grants committees or purchasers. And now they’re worried people are cheating on their homework?

The new fundraiser struggled to convey human warmth in his bid-writing.

Straw men

We’re not alone, of course. I’m also a qualitative researcher, and there was a lot of bristling when people talked about the potential use of AI at a conference I went to last year. This, despite the fact that computer-aided systems of some sort have been standard in that field for thirty years. A guy did a really good talk, full of possibilities, and pretty realistic. Selling nothing. You felt the mood turn in the room. ‘It can’t work! How dare you!’ A lot of talk about personal data, when he was clear that wasn’t even involved. Speaker after speaker for the rest of the day used it as the bête noire against which all things must be judged. Understandable in some ways – we’re right to fear for our jobs, especially at entry level. This is not an unrealistic concern in the VCS either. (I think it will be okay in the end – as I’ve made clear elsewhere.)

AI is Just a Tool

When it comes to tech, I wish we could just try to be realistic and, rather than diving head-first into black-and-white thinking, look at some very objective and immediate benefits. For example, regarding one type of AI: I spent 90 minutes last week teaching people to use a direct-to-consumer tool (ChatGPT) that costs £20 a month, to help them speed up multiple versions of bids, or as a way to get away from the blank page. A way to bounce ideas around in a fun way. It is useful like Microsoft Excel is useful. And it’s about the same price. I spent the first fifteen minutes telling people all the ways it couldn’t and shouldn’t replace them. I think people started to think I was trying to put them off, or was just making excuses for lack of knowledge. I wasn’t; I just knew how much hype I was working against. Expectations must be managed.

Later that afternoon, one person contacted me to say that an hour after the training she was using it to rewrite her impact report. Others said they were getting a better idea of its strengths as well as its weaknesses. Many had previously tried it and given up – without realising that it is no more magic than Excel, and that it needs you to learn how to use it. And let’s be clear: there is a world of difference between someone using Claude to summarise their bid for a later PowerPoint, and paying £24k for a system that does the same thing with a few fewer steps. The retrieval-augmented generation (RAG) systems out there are probably calling ChatGPT themselves. And all but a few features are available to you off the shelf. If people made time for training, and opened themselves up to possibilities, they wouldn’t even have to consider using predatory tech. But so often they resist that too.
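To make that concrete, here is a minimal sketch, in Python, of what sits at the core of a typical RAG bid-writing tool. This is my own illustration, not any vendor’s actual code: it assumes the official openai package (with an API key set in your environment), and a hypothetical past_bids list standing in for your document store. The crude keyword-overlap retrieval stands in for the vector search a commercial product would use, but the principle is the same: fetch relevant snippets from your own documents, then make one call to an off-the-shelf model.

```python
# Toy sketch of a 'RAG' bid-writing assistant:
# (1) retrieve relevant snippets from your own documents, then
# (2) make one call to an off-the-shelf model.
# Assumes the official `openai` package and an OPENAI_API_KEY in
# your environment. `past_bids` is a hypothetical stand-in for a
# real document store.

from openai import OpenAI

past_bids = [
    "Our befriending service reached 240 isolated older people last year...",
    "We co-produce our mental health groups with people with lived experience...",
    "Our outreach team supports rough sleepers across three boroughs...",
]

def retrieve(question: str, docs: list[str], top_n: int = 2) -> list[str]:
    """Crude keyword-overlap retrieval. A commercial system would use
    vector search here, but the shape of the step is the same."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_n]

def draft_answer(question: str) -> str:
    # Stitch the retrieved snippets into the prompt, then ask the model.
    context = "\n".join(retrieve(question, past_bids))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any current chat model would do
        messages=[
            {
                "role": "system",
                "content": "You help a charity draft funding bids. "
                           "Use only the context provided.",
            },
            {
                "role": "user",
                "content": f"Context from our past bids:\n{context}\n\n"
                           f"Funder question: {question}",
            },
        ],
    )
    return response.choices[0].message.content

print(draft_answer("How do you involve service users in designing services?"))
```

Strip away the interface and the polish, and that retrieve-then-ask loop is most of what the £24k buys.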

 

VCS tech: a history of resistance

I’ve been here before. A lot. I remember the voluntary sector resisting email, resisting the internet and websites, resisting databases, resisting social media, even railing against texting. Fifteen years ago, I remember people being incredibly hostile when I was asked to set up a tech startup for St Mungo’s and was trying to show them how, before affordable smartphones, they could use bulk texting and calling to get information to clients with no fixed address, and to help them stay in touch.

I remember that in one meeting people were so mean, I cried when I went home. I was accused of trying to dehumanise their clients, of trying to take people’s jobs away; what could be better than a person’s voice, etc. Claims were made that no clients had mobile phones, while every single client in the room had one. Sigh. Within two years everyone was doing it, because of course we were. Being able to send people text messages to remind them about your advice session isn’t going to stop you talking to people. In fact, it gives you more time to do so.

More recently, I was looking at how Zoom was being used to provide social activities for people who were housebound. The people running it had found it useful where nothing else was on offer: a lifesaver for people who had no alternative. Some clients were (literally) begging not to have it taken away. Meanwhile, the people (clients and staff) who didn’t like the idea were focused on removing it for those who did. Organisations that refused to try it with their clients said their clients wouldn’t want it. But they weren’t even willing to ask. At root, its very existence was something they found morally suspect, or worse. The fact that people usually come to adopt these technologies later is a clue that in some cases, the problem isn’t really the technology at all.

 

What technology ‘means’ for us

Alongside all the practical issues I’ve mentioned, I think a rarely addressed problem underlying it all is what technologies come to ‘mean’. We all do this, across our whole society: human beings project our wider anxieties about the world onto specific technologies. At every stage, I’ve seen people treating technological change, particularly in the helping professions, as fundamentally opposed to what they stand for.

There are a lot of things AI can easily stand in for that we rightly dislike in the helping professions. Like neoliberal economics and governments hell-bent on destroying the social fabric of society. Unfettered industrial capitalism and its depredations on the human spirit. Or the total rejection of the humanistic approach the VCS specialises in, in favour of sausage-machine processing. Alongside the reality, I think we are seeing AI as a totem for the things we are angry about in the helping professions. Dehumanisation. Transactionalisation. Cost-cutting. Depersonalisation. Industrialisation. And the collapse of our sector after 15 years of austerity. All of which are very real concerns. But they are not coming from AI.

There is a reality here, but there are also a whole range of psychic defences we’re putting up against the massive anxiety of our wider economic, political and social position. The projection (AI and the technologists are the problem). The splitting (VCS good, private sector bad; fundraisers good, AI bad). A new bit of tech seems a lot easier to control, and to jump on, than the bigger and much more damaging changes we feel we can’t touch.

And let’s remind ourselves again: we’re just talking about bid-writing – the very thing we’ve been whingeing about for decades. 

So it’s easy to look at some AI start-ups as the devil. But they’re not the devil; they’re just too expensive, and may not even be that good.

Meanwhile, £20 a month for ChatGPT or Claude buys me a tireless fundraising assistant who can rewrite bids ad infinitum and never complains. And that’s why we need a reality check. We need to use it like we use Microsoft Excel. But please, please, don’t pay £24k for it.
