6 minute read time.

Countless words have been written and spoken about what is or might go wrong with AI.  How much has been written about AI good practice?  Here are some good practices worth considering…

A thought

Much is written about the ethics, possibilities and risks, even existential fears, associated with Artificial Intelligence. Less appears to have been written about best practice in using it now. Taking account of best practice allows the designer and operator of an AI system to build a solid, trustworthy reputation, while staying on the right side of the law.

Some would say the key to good AI is to remain focused on representing the customer’s requirements to the AI in a way it can understand, while remembering that humans speak human; AI and computers speak numbers.

The Landscape

Before embarking on an AI project, there are several things to take into account. One would be to ask, ‘what will it be used for?’ Is the task, for example, safety critical? Another question to consider would be, ‘is AI truly necessary for this work, and what benefits (if any) will it bring?’

From here, it is necessary to thoroughly assess the needs of the team and to develop and assess the project’s aims and objectives. Alongside this, it is important to understand the AI landscape and the type of AI to be used - be it machine learning, deep learning, large language models or neural networks. Some AI systems are good at crunching numbers and working with large datasets; others are better at content creation, facial recognition or recognising objects. It is important to think about the tasks the AI will be expected to undertake in order to choose the right AI for the job.

These steps will improve the ability of stakeholders to understand what is happening within the workspace and what is being offered to users and customers. Consequently, they will improve decision making while allowing the organisation to innovate and keep abreast of the latest technologies and developments.

Training…the stakeholders

As with any new technology, investing time and money in training is key to using it efficiently and effectively. To encourage the full engagement of employees in the process, there needs to be transparency about the implementation of any AI strategy. The features of AI tools should be broken down so that each issue can be clarified and resolved. Similarly, being clear on how each department within an organisation will be affected by the introduction of AI is a good start to calming scepticism about its implementation.

Use, the user and responses

Any AI project will be a lesson in the complexities of everyday life and how humans respond to a wide range of (often simultaneous) stimuli - whilst coping with situations that vary from the trivial to the life-threatening.

Designing the AI system with the end users and their experience in mind will provide a clear gauge for the validity of the AI’s decisions and predictions. Features designed with appropriate transparency hard-wired in - something that provides both clarity and control - are key to a good user experience.

It could be appropriate to produce a system that gives one single answer, if that answer is capable of satisfying a range of users and use cases. There will be instances, though, where it is better to provide a response that suggests a number of options to the user.

Should such options be employed, how will a user choose the ‘right’ one? Further thought may need to be given to how each option is weighted, if at all, and how those options are presented to the user - as sketched below.
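
By way of illustration, here is a minimal sketch in Python of one way ranked options might be surfaced to a user, with each option carrying a confidence weight. The option texts, the weights and the ‘top three’ cut-off are all illustrative assumptions, not a prescription.

```python
# Illustrative sketch: rank candidate answers by confidence and
# show only the strongest few. All names and numbers are made up.

def present_options(options, top_n=3):
    """Return the top_n options, ordered from most to least confident."""
    ranked = sorted(options, key=lambda o: o["confidence"], reverse=True)
    return ranked[:top_n]

candidates = [
    {"answer": "Reroute via junction B", "confidence": 0.72},
    {"answer": "Hold at current position", "confidence": 0.55},
    {"answer": "Reroute via junction C", "confidence": 0.31},
]

for option in present_options(candidates):
    print(f"{option['answer']} (confidence {option['confidence']:.0%})")
```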

Of course, not all situations requiring a response are created equal; some are time critical, requiring an almost instant response to events. In those instances, multiple options could be an insurmountable obstacle, given such situations require split-second decisions.

Feedback should be modelled early in the process, with iterations and live testing carried out on a small sample of traffic before the system is fully deployed. To increase the number of people who will benefit from an AI project, it will be necessary to gather as wide a variety of user perspectives as possible; as wide a range of users and use-case scenarios as practical should be sought out before and during the development of the project.
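
One common way to test live on a small sample of traffic is a so-called canary rollout, sketched below in Python. The five per cent share and the stand-in models are assumptions for illustration only.

```python
import random

CANARY_SHARE = 0.05  # illustrative: 5% of requests go to the candidate model

def route_request(request, current_model, candidate_model):
    """Serve most traffic from the proven model; sample a little to the candidate."""
    if random.random() < CANARY_SHARE:
        return candidate_model(request), "candidate"
    return current_model(request), "current"

# Stand-in models, purely for demonstration
def current(r):
    return f"current answer to {r!r}"

def candidate(r):
    return f"candidate answer to {r!r}"

response, served_by = route_request("example query", current, candidate)
print(served_by, "->", response)
```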

Having said this, procedures will be needed to arrive at a solution should there be no consensus among the consulted users. Alternatively, human factors experts could be engaged to help arrive at one.

Privacy

Private information should remain private. As such, other early-stage priorities revolve around data protection - minimising the volume of data to be collected, and anonymising and protecting what is collected. This will give users confidence that their personal information will not be sold, misused or abused by malicious actors.
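
As a rough sketch of what minimisation and anonymisation can look like in code, the Python below keeps only the fields a model needs and replaces the direct identifier with a salted one-way hash. The field names and the salt are illustrative assumptions; a real system would manage the salt as a secret.

```python
import hashlib

FIELDS_NEEDED = {"age_band", "region", "usage_count"}  # collect only these

def pseudonymise_id(user_id, salt="replace-with-a-managed-secret"):
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def minimise(record):
    """Keep only the fields the model needs, plus a pseudonymous key."""
    kept = {k: v for k, v in record.items() if k in FIELDS_NEEDED}
    kept["user_key"] = pseudonymise_id(record["user_id"])
    return kept

print(minimise({"user_id": "alice@example.com", "age_band": "30-39",
                "region": "NW", "usage_count": 12, "phone": "07700 900000"}))
```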

Consider regular and precise audits of data practices, security measures and protocols, to give a clear reading of vulnerabilities, gaps and the threats to organisational systems and AI tools.

Datamining

To be sure of getting accurate results, we must be sure the data is itself up to the task. This can require a team of experts to check for factual inaccuracies, missing or biased information, missing values or incorrect labels. It is worth asking whether the relationships between the raw data and the predictions we want are what we think they are. Such checking can mould expectations of results, bearing in mind the limitations and boundaries that are uncovered and cannot be improved.
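
A few of these checks can be automated. The Python sketch below, using the pandas library, assumes a labelled dataset in a CSV file; the file name, column names and expected labels are illustrative.

```python
import pandas as pd

df = pd.read_csv("training_data.csv")  # illustrative file name

print(df.isna().sum())             # missing values per column
print(df.duplicated().sum())       # exact duplicate rows
print(df["label"].value_counts())  # class balance - a heavy skew may signal bias

# Check that labels fall within the expected set
expected_labels = {"approve", "reject"}
unexpected = set(df["label"].unique()) - expected_labels
print("Unexpected labels:", unexpected)
```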

Training…the AI

Not only do the stakeholders require training, but so does the AI. There are three stages: initial training, training validation and testing. In initial training, a large dataset is put into the AI so it can begin to ‘learn’, and any errors that have crept in can be checked for. In the training validation stage, the AI’s output is checked against a new set of data, the validation data. When this is complete, the testing stage puts new, previously unseen data into the AI. Should it perform as expected, it is ready for launch. If not, then training must begin again from square one.
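
In practice, the three stages rest on splitting the data three ways. A minimal sketch using scikit-learn is below; the synthetic dataset and the 70/15/15 split are illustrative choices, not requirements.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real dataset
X, y = make_classification(n_samples=1000, random_state=42)

# 70% for initial training, 30% held back...
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=42)

# ...then split the remainder into validation (15%) and test (15%)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42)

# model.fit(X_train, y_train)    # initial training
# model.score(X_val, y_val)      # training validation
# model.score(X_test, y_test)    # final test before launch
```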

To be clear…

Use clear and concise instructions and prompts for AI tools and systems. Clearly written instructions are more likely to communicate intentions and produce the desired outputs. In turn, this will help users make better decisions, make workflows more efficient, and reduce errors and misunderstandings of the AI by its users.
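
To make the point concrete, here is an illustrative pair of prompts; both are invented examples, and the right wording will always depend on the tool and the task.

```python
# Invented examples: a vague prompt versus a clearer, more specific one
vague_prompt = "Tell me about the report."

clear_prompt = (
    "Summarise the attached quarterly safety report in three bullet points, "
    "written for a non-technical audience, and flag any incident counts "
    "that rose compared with the previous quarter."
)
```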

The truth, the whole truth and nothing but the truth

Clear prompts do not mean that AI can be left to itself to check facts. It can make mistakes or use unreliable source material to complete its tasks. Rather, fact checking the reliability of sources and information becomes pivotal to trustworthy and dependable results.

Keeping an eye…

Living, as we do, in an imperfect world operated by flawed human beings, AI systems will reflect some of that imperfection. Time should be ‘built into the system’ to allow for double-checking that the system is performing as it should, and that it hasn’t taken any strange turns while in operation.
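
One simple routine check is to compare the live mix of predictions against what was seen at launch, and raise a flag if the two drift apart. The Python sketch below does just that; the baseline shares, labels and ten-point threshold are illustrative assumptions.

```python
from collections import Counter

BASELINE = {"approve": 0.60, "reject": 0.40}  # distribution at launch (illustrative)
ALERT_THRESHOLD = 0.10                        # flag a 10-point drift

def check_drift(recent_predictions):
    """Warn if the live prediction mix drifts from the launch baseline."""
    counts = Counter(recent_predictions)
    total = sum(counts.values())
    for label, baseline_share in BASELINE.items():
        live_share = counts.get(label, 0) / total
        if abs(live_share - baseline_share) > ALERT_THRESHOLD:
            print(f"Drift alert: '{label}' now {live_share:.0%}, "
                  f"was {baseline_share:.0%} at launch")

check_drift(["approve"] * 45 + ["reject"] * 55)  # triggers an alert
```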

Short-term fixes will sometimes be necessary to complete a task, but in the long run these will require reassessment and a better, long-term fix. And before updating a deployed AI, take into account the differences between the deployed model and the candidate model, plus how these will affect the experience of users and the quality of the system.
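
A simple way to weigh a candidate against the deployed model is to score both on the same held-out data before promoting either. The sketch below uses scikit-learn’s accuracy metric; the models, data and ‘strictly better’ rule are illustrative assumptions.

```python
from sklearn.metrics import accuracy_score

def should_promote(deployed, candidate, X_holdout, y_holdout):
    """Promote the candidate only if it beats the deployed model on the same data."""
    deployed_acc = accuracy_score(y_holdout, deployed.predict(X_holdout))
    candidate_acc = accuracy_score(y_holdout, candidate.predict(X_holdout))
    print(f"deployed: {deployed_acc:.3f}  candidate: {candidate_acc:.3f}")
    return candidate_acc > deployed_acc
```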

It is possible to get Artificial Intelligence right, and this technology can provide substantial, powerful and cutting-edge support to many areas of human endeavour. While many will remain worried and sceptical about its deployment and use, as we have seen, the success of AI is highly dependent on the way it is set up and used by human beings. Perhaps therein lies its only major ‘flaw.’

Is there any ‘good practice’ that has been missed from this blogpost, and what might it be?