One of the great challenges facing cross-border co-operation is proving that it actually works. This is sometimes a difficult problem to explain, as anyone involved in such co-operation is usually pretty convinced that it is a good thing. After all, how can it not be positive to bring people together and improve the living conditions on each side of the border?
Well, of course, that is positive, and the Border Crosser is not going to disagree. But there is a big difference between knowing in your gut that co-operation works and proving that it does. The INTERREG programmes frequently come under pressure to demonstrate that they are delivering "added value" (such a great phrase - as opposed to subtracted value, I suppose?).
The EU's traditional approach to this question has been to throw indicators at programmes and hope that some stick in a positive manner. The problem with this approach is that successful indicators for regional programmes do not often help in a cross-border context. Numbers of jobs created, improvement in GDP, or increase in tourist numbers, for example, do not really address the issue of whether the co-operation as a whole is working.
There are some ideas out there with potential: some of the Nordic programmes have been counting the number of cross-border networks created; this could perhaps be combined with the number of such networks which outlast the funding from the programme. There must be more project-level measurements that could be developed along these lines.
Another direction that should be explored is measuring the mechanics of the programmes themselves: number of split decisions in programme committees; length of committee meetings; number of projects which are delayed by more than x months; number of projects with changed partnerships. These are all factors which could be used in measuring the overall success of the cross-border programmes. Any results would probably have to be combined into a single weighted score, which would allow comparisons from programme to programme. Such a comparative aspect could be the best way to assess co-operation as a whole.
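To make the weighted-score idea concrete, here is a minimal sketch in Python. The indicator names, the 0-to-1 normalisation, and the weights are all hypothetical - any real scheme would need agreed definitions and data - but it shows how separate programme-mechanics measurements could collapse into one comparable number.

```python
def weighted_score(indicators, weights):
    """Combine indicator values (each already normalised to 0..1)
    into a single 0..100 score using relative weights."""
    total_weight = sum(weights.values())
    raw = sum(indicators[name] * w for name, w in weights.items())
    return 100 * raw / total_weight

# Illustrative indicators for two imaginary programmes:
# share of networks outliving their funding, share of projects
# delivered on time, share of unanimous committee decisions.
programme_a = {"networks_surviving": 0.8, "on_time_projects": 0.6,
               "unanimous_decisions": 0.9}
programme_b = {"networks_surviving": 0.5, "on_time_projects": 0.9,
               "unanimous_decisions": 0.7}

# Hypothetical weighting: surviving networks count double.
weights = {"networks_surviving": 2, "on_time_projects": 1,
           "unanimous_decisions": 1}

print(round(weighted_score(programme_a, weights), 1))  # 77.5
print(round(weighted_score(programme_b, weights), 1))  # 65.0
```

The single score makes programme-to-programme comparison trivial, though the choice of weights is of course where all the arguments would happen.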
It's unlikely that there is a perfect system out there. If a programme scored very highly on co-operation, someone would claim that it is doing so well that it no longer needs any funding. However, indicators are here to stay, and programmes need to start looking at them as an opportunity to demonstrate success, rather than seeing them as an administrative burden.
Well, that's far too serious and long a post for a Friday afternoon! Let me know what you think.