At the start of the meeting, attendees introduced themselves and the organisations they represented.
The chair made opening remarks, saying that the meeting presented an opportunity for public sector leaders to learn from Google, a world leader in the field of data and artificial intelligence.
The first speaker then gave an overview of his company's existing work within the public sector. The company is currently working with rail franchise holders to process ticket data and previously worked with the Office for National Statistics to support the delivery of the 2021 Census. He went on to praise the unique artificial intelligence ecosystem in the UK, noting vast academic and commercial investment in the technology. This ecosystem can be seen first-hand at King's Cross, London, a global hub for AI anchored by a major company headquarters.
The first speaker continued, summarising ethical concerns about AI and asking participants to be open about the risks that their organisations have identified. He added that in the private sector, the financial services industry is a particularly advanced user of the technology. Providers are using large language models to either summarise or enrich information. The best solutions are business driven and respond to an acute need, as opposed to standalone solutions invented by technologists, which often fail.
Turning to the public sector, the first speaker argued that AI has the potential to transform communication between governments and citizens, citing an unnamed organisation which found that 92% of contacts accepted an AI-generated response. He cautioned that it is important for all organisations to be transparent about their use of AI, noting that this best practice will build trust and is unlikely to deter users.
The first speaker concluded his remarks by saying that he views large language models as a scaling tool, rather than an efficiency tool that will replace jobs currently performed by humans. His company has a lower staff-to-customer ratio than many traditional businesses, such as banks, largely as a result of its longstanding reliance on AI.
A second speaker asked whether large language models can be used as a creative tool, rather than a means to process existing information. The first speaker answered that a human still needs to review work produced by AI, at least in the short to medium term. Future workers are unlikely to do repetitive tasks, but will instead review output produced by AI.
A third speaker, representing an organisation that handles vast quantities of data, suggested that large language models were similar to junior employees, in the sense that the work they produce needs close review. He added that his organisation has been using AI, within government guidelines, to optimise search functions and sift through large quantities of information. This has been quick and efficient, with models built and launched in a matter of weeks to speed up internal processes.
The third speaker continued, asking whether large language models can be improved by feeding them only relevant information. This approach is known as using a private corpus, as opposed to a public corpus, in which a model draws on information widely available online.
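To make the distinction concrete (this sketch was not part of the meeting itself), a common pattern is to retrieve relevant passages from a private corpus and supply them to the model alongside the question, so that answers are grounded in internal information rather than the open web. The minimal Python sketch below uses simple TF-IDF retrieval; the documents, query, and function names are all hypothetical.

```python
# Sketch: retrieving from a private corpus before querying a model.
# The documents, query, and helper names here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A "private corpus": internal documents not available on the open web.
private_corpus = [
    "Ticket sales on the northern route rose 4% in Q2.",
    "Census processing guidance: validate postcode fields before ingest.",
    "Fraud and error checks flagged 1,200 duplicate claims last month.",
]

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k corpus documents most similar to the query."""
    vectoriser = TfidfVectorizer()
    matrix = vectoriser.fit_transform(corpus + [query])
    doc_vectors, query_vector = matrix[:-1], matrix[-1]
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

query = "Which checks should run before ingesting census data?"
context = retrieve(query, private_corpus)
# In practice, the retrieved passages would be prepended to the prompt
# sent to a large language model, grounding its answer in the private
# corpus rather than the public web.
print(context)
```

In a production system the TF-IDF step would typically be replaced by embedding-based search, but the grounding principle is the same.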
A fourth speaker discussed several possible use cases for large language models at another government department, including reducing waste from fraud and error, potentially saving billions of pounds per year. She added that AI can also be used to optimise information management, ensuring that staff members can do their jobs effectively by efficiently accessing vast quantities of information that might otherwise take a human many years to process and understand.
However, the fourth speaker added that large language models are only useful if they are fed with accurate data. This is a constraint in the public sector, where data suffers from known limitations. She concluded that, for the foreseeable future, all departmental decisions will be made by humans, not AI.
The first speaker rejoined the conversation, noting that the cross-government challenge is how to ensure departmental data is interoperable and accessible. A fifth speaker followed up, saying that the National Data Strategy should implement measures that enable data sharing in government. However, the strategy is guidance rather than a mandate, meaning that barriers to access remain.
The fifth speaker discussed the steps a government directorate is taking to improve data interoperability across government, including the formation of a data marketplace. The chair then asked how the marketplace will function, making the point that departments should be mandated to take part, rather than just being offered incentives.
An earlier speaker said that he feared many departments might enter the data marketplace with unusable data. He concluded that departments would require additional resources to implement quality assurance ahead of any major cross-government data sharing project.
There was a brief discussion about how to guarantee confidentiality while sharing data between users, such as different government departments. The first speaker identified two solutions. First, a method of analysis known as 'confidential compute' enables one organisation to run another organisation's code on its own system. This gives third parties access to aggregate data without personal data changing hands. Second, 'federated learning' involves hosting machine learning algorithms on users' devices, removing the need to collect and store data centrally.
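Neither technique was demonstrated in the session, but the federated learning pattern can be sketched briefly: each device refines a shared model on its own data and returns only the updated weights, which a coordinator averages. Everything in the Python sketch below (the data, the linear model, and the function names) is an illustrative assumption, not a description of any attendee's system.

```python
# Sketch of federated averaging: raw data never leaves each device;
# only model weights are shared and averaged. All data, model shapes,
# and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, steps: int = 10) -> np.ndarray:
    """One device refines the shared weights on its own private data
    (linear model, squared loss) and returns only the new weights."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three devices, each holding private data drawn from the same task.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

# Each round, the coordinator averages the locally trained weights;
# the raw data is never collected or stored centrally.
global_w = np.zeros(2)
for _ in range(5):
    local_weights = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_weights, axis=0)

print(global_w)  # approaches true_w without pooling the raw data
```

Only the weight vectors travel between the devices and the coordinator, which is what removes the need for central data collection and storage.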
The first speaker then made a concluding statement, reiterating his thanks to attendees and asking that they contact him with further questions or particular use cases. The chair followed and summarised the session, before also thanking attendees.