By Clint Lovell

How to make AI more transparent



After years of talk, AI has truly arrived – with Terminator-level impact.


Previously it was behind the scenes, but not front of mind. Now it’s not only driving advances across every industry and field of study, but has also fallen into the hands of consumers in the form of generative AI. Many of us have been enjoying feeding ambitious (or ridiculous) prompts into natural language processing tools like ChatGPT and image generators like Midjourney and DALL-E. After all, who wouldn’t like to see the lyrics to I Should Be So Lucky in the style of Shakespeare?


But it’s also making people anxious, particularly creatives. It’s one of the reasons behind the Hollywood writers’ strike: if you write a blockbuster, how do you stop your work being used to train an AI to write the next one?


AI decision-making is potentially an even bigger problem. A recent BBC Panorama programme highlighted the case of an Uber driver who nearly lost his job because an AI decided, in error, that he had broken the rules. In financial services, there have been accusations of bias when AI has ruled on things like credit and mortgage applications. That’s bad enough, but in healthcare, for instance, a wrong decision could mean the difference between life and death.


The trouble is, AI is often a ‘black box’, with no way of finding out how a decision was reached, unlike humans who can explain themselves. This creates fundamental ethical and accountability issues – and a lot of mistrust.


So AI needs to become more transparent. Here are some strategies for doing it, or at least some to be aware of if your department or organisation is using AI (which it probably is, or soon will be). After all, none of us wants to be the one who didn’t realise their oh-so-handy programmatic advertising tool was running ads on decidedly dodgy websites (sorry, Google).


1. Explainable AI

Use AI whose decision-making can be understood by non-experts, through techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), which show how much each input contributed to a particular decision.
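
To give a flavour of what this looks like in practice, here’s a minimal sketch using the open-source shap package with a scikit-learn model. The dataset and model are illustrative placeholders, not a recommendation.

```python
# A minimal sketch of explaining a single prediction with SHAP.
# Assumes scikit-learn and the open-source `shap` package are
# installed; the dataset and model are illustrative placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values for tree-based models:
# each value is one feature's contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])

# Print each feature's contribution to the first prediction,
# so a non-expert can see *why* the model scored it as it did.
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```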

2. Data provenance framework and quality control

Check that training data is high-quality, diverse and unbiased. A data provenance strategy will enable organisations to track data origins, helping to identify potential sources of bias or error.
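
As a simplified illustration, here’s a sketch of a provenance record that fingerprints a dataset file and logs where it came from. The field names and the training_data.csv file are our own illustrative choices, not an industry standard.

```python
# A minimal sketch of a data provenance record: a fingerprint plus
# origin metadata stored alongside a training file. Field names and
# the file path are illustrative placeholders.
import hashlib
import json
from datetime import date

def provenance_record(path: str, source: str, licence: str) -> dict:
    """Fingerprint a dataset file and record where it came from."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "sha256": digest,          # detects silent changes to the data
        "source": source,          # where the data was obtained
        "licence": licence,        # terms under which it may be used
        "recorded_on": date.today().isoformat(),
    }

record = provenance_record("training_data.csv",
                           source="internal CRM export, Q2",
                           licence="internal use only")
print(json.dumps(record, indent=2))
```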

3. Auditing and documentation

Document the development process so that any problem can be traced back to its source. Regular auditing helps identify areas where transparency can be improved.
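
Experiment-tracking tools are one common way to do this. Here’s a hedged sketch using the open-source MLflow library; the run name, parameters and metrics are entirely illustrative.

```python
# A hedged sketch of documenting a training run with MLflow, an
# open-source experiment tracker. The run name, parameters and
# metrics below are illustrative placeholders.
import mlflow

with mlflow.start_run(run_name="credit-model-v3"):
    # Record what went into the model...
    mlflow.log_param("algorithm", "RandomForestClassifier")
    mlflow.log_param("training_data", "training_data.csv (see provenance record)")
    # ...and how it performed, so any later problem can be traced
    # back to a specific run, dataset and configuration.
    mlflow.log_metric("accuracy", 0.93)
    mlflow.log_metric("false_positive_rate", 0.04)
```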

4. Ethical considerations

Establish ethical guidelines for AI development, deployment, and use. These guidelines should address fairness, privacy, and the potential societal impact of AI systems.

5. User-friendly interfaces

Build trust by designing user interfaces that serve up AI-generated outcomes in a way that’s easy to understand.
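
For example, rather than showing a raw score, an interface might translate a decision and its top contributing factors into a plain-English sentence. A minimal sketch, with entirely illustrative factors and wording:

```python
# A minimal sketch of presenting a model decision in plain English
# rather than as raw scores. The decision, factors and wording are
# illustrative placeholders.
def explain_decision(outcome: str, factors: dict[str, float]) -> str:
    """Turn a decision and its top feature contributions into a sentence."""
    top = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)[:2]
    reasons = " and ".join(name for name, _ in top)
    return f"Application {outcome}. The biggest factors were {reasons}."

print(explain_decision(
    "declined",
    {"income-to-debt ratio": -0.42,
     "credit history length": -0.31,
     "employment status": 0.05},
))
```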

6. External auditing and peer review

Collaborate with external experts, auditors, and researchers to evaluate AI systems for biases, vulnerabilities, and ethical concerns.

7. Education and training

Educate stakeholders, including developers, users, and decision-makers, about the importance of transparent AI.

8. Regulatory compliance

Stay informed about emerging AI regulations and compliance requirements, such as the EU’s AI Act, and integrate them into your AI development processes.


If you need help in communicating your AI strategy, we’re happy to lend a hand.


Featured image by Aleks Dahlberg on Unsplash.


